Choqok, the smallest model we could make to run on your devices

The Jabir Project was started in late 2011 by Muhammadreza Haghiri as an attempt to democratize computing. Its first product was a Linux distribution built around Free/Libre and Open Source Software (FLOSS) goals. The operating system project went dark in 2016 due to some problems, but in 2024 the Jabir Project came back.

The comeback came with the slogan "Let's build LLMs together". Although our flagship model, Jabir 400B, isn't open source yet, we've decided to make our smaller, easier-to-use models open source and available for public use.

Now, we're pleased to announce the release of Choqok, a one-billion-parameter model that can run on pretty much any GPU and device you can think of.

How to access Choqok

To access Choqok, you can use our OpenAI-compatible API, or you can use our newly released ChatUI (more details later in this article).
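Because the API follows OpenAI's request format, any OpenAI-style client can talk to it. Here is a minimal sketch using only the Python standard library; the base URL and model id below are placeholders I've assumed for illustration, so substitute the real values from your Jabir Project account.

```python
# Hedged sketch of calling an OpenAI-compatible /chat/completions endpoint.
# BASE_URL and the "choqok" model id are placeholders, not confirmed values.
import json
import urllib.request

BASE_URL = "https://api.jabirproject.org/v1"  # placeholder endpoint

def chat_payload(prompt: str, model: str = "choqok") -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def complete(prompt: str, api_key: str, model: str = "choqok") -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(prompt, model)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same request shape works with the official `openai` Python client by passing `base_url` and `api_key` when constructing the client.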

But if you have 2 GB of GPU VRAM (or use the quantizations available here, which run on CPU with Ollama and 2+ GB of RAM) and a basic knowledge of Python, you can pay a visit to our HuggingFace page and start using the model programmatically.
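For programmatic use, a small chat model from HuggingFace can typically be run with the `transformers` library. The sketch below assumes a hypothetical repository id (`jabirproject/choqok` is a placeholder, not the confirmed name) and uses a lazy import so the helper functions stay importable without the library installed:

```python
# Minimal local-inference sketch for a ~1B-parameter chat model.
# The model id "jabirproject/choqok" is a PLACEHOLDER -- check the
# Jabir Project HuggingFace page for the real repository name.

def build_messages(user_prompt: str,
                   system_prompt: str = "You are a helpful assistant.") -> list:
    """Assemble an OpenAI-style message list for the chat template."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def generate_reply(user_prompt: str,
                   model_id: str = "jabirproject/choqok") -> str:
    """Load the model, apply its chat template, and generate a reply."""
    # Imported lazily so the module loads even without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    input_ids = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:],
                            skip_special_tokens=True)
```

At one billion parameters, a 4-bit quantization of a model this size generally fits within the 2 GB budget mentioned above.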

Known problems

  1. It still has problems generating Persian/Arabic text, although it understands Persian grammar.
  2. It has some problems recognizing its own name, a side effect of the base model (LLaMA 3.2) and the synthetic data we used. If you have used DeepSeek v3, you may have seen the same issue, with the model claiming it was made by OpenAI.

Our Chat User Interface

For the sake of simplicity, in addition to our API (which is fully compatible with OpenAI's architecture and has been tested with Vercel's ai-sdk and Python's openai library), we have also released a Chat User Interface that can be used to test our models.

Now you can open it without signing up, and without any concern about your data being seen by a third party, and use the models with ease. Just open ChatUI, choose your model, and start prompting!

For Businesses

If you own a business and need a specific model tuned or trained, or if you're using our APIs more heavily than usual, you may leave a message at info@jabirproject.org and tell us about your needs.

Also, since the Jabir Project is considered a non-profit organization that helps other AI projects and open source developers, we're open to any type of investment or donation from our user base.

Conclusion

The Jabir Project currently improves prompts for Atelier AI's image generation, summarizes video transcriptions on YouTubeLM, and will soon power image/video generation on Mann-E as well as serve as the backbone of a new platform named Rapidens. This means we are already in our production phase, and now it is time to go more and more on-device.

On-device models are important because they guarantee the digital freedoms of computer users, and while most of the FLOSS community, especially in Iran, is largely passive, we're ready to take serious action toward truly open AI.

The new era for Jabir Project is coming!

I, Muhammadreza Haghiri, the founder of the Jabir Project, have built a strong base in the Iranian AI ecosystem, and this has drawn the attention of a large group of my non-Iranian friends who wanted to help me build a solid LLM-based project.

In the past couple of weeks, I've been busy working on a product called AtelierAI, a personalization platform where you can train your own models on your own image datasets. At the same time, I've stayed involved in the Jabir Project as well.

It is my pleasure to announce the changes we're making in the Jabir Project, and what you can expect from us in the near future!

The new Jabir Models

In the first days of the Jabir Project, we had only one model, Jabir 400B, which was a finetune of LLaMa 3.1 405B.

We're still working on that model, but we're also going to offer new models very soon!

Jabir 400B Online

Well, as the name suggests, this is a model that is connected to the internet and able to search the web. We're going to make it our flagship model, since it can provide up-to-date information and has a very good understanding of multiple languages.

Now, I have to say this particular project is going to fulfill my teenage dream of making a reliable and fast search engine. Honestly, the search engine behind the scenes is MOA, which was developed by a team of my friends.

This model will be available very soon for testing and for building AI-based products.

Jabir Evil

Okay, every time you tell the community "Hey, I made an LLM", someone will ask "Is it uncensored?", and I personally think it has basically become a norm to make an uncensored version of your model as well.

We have also developed a version of our model called "Evil", which has basically no ethical barriers and will answer everything you ask it. Development of this model is in progress and it will be released very soon!

Conclusion

We moved from an operating system project to an AI project, and now we're going to make AI our flagship product. But since Alan Kay said "People who are really serious about software should make their own hardware", we may have plans to combine the Jabir Project, Mann-E, and other products to usher in a new era of AI-equipped handheld devices.

This was just a thought, but we’re really serious about it and we’re going to make it happen. What do you think?

Let’s start over.

You're probably looking for the ugly index page where you could sign up and get free API access, right? Well, it's history. The API still works for people who already signed up, and now we're taking a new angle on the whole Jabir Project.

First, I'll tell you about the history of this project; then we'll discuss the new angle we're taking to make our products a reality.

A little bit of history

It was around 2011 when I, Muhammadreza Haghiri, started the Jabir Project. Back then, the goal of the project was very different. This is probably the best history lesson you'll get.

I remember that a year prior, I had read about Professor Tanenbaum's book on operating system design and implementation in a local magazine. It was a great article, made even better by a few paragraphs about the history of Linux at the end.

The Linux story sparked something in my head, and I just wanted to do something similar. I don't know how to explain it, but my whole life began to change from that particular day in a hot summer in the south of Iran. I decided to make an operating system!

Anyway, although it was a great idea, I soon realized it is hard to code a whole operating system from scratch, so I decided to make a Linux-based operating system instead.

Then I joined Linux forums and asked how an operating system can be modified and repacked. I'm glad that I've always been a fast learner: I gathered information and tools, and I started building!

I remember it was March 2011 when I released the very first version of JabirOS under this very same domain, and it got attention from different communities. Although a lot of people questioned why the project existed, it was a good start for me to learn more about Linux and operating systems in general.

It was in 2015 when I decided to shut JabirOS down, and I did. For a few years there was no Jabir Project, and honestly, the domain was taken by a Chinese company and I was really sad about not owning it.

One day, I saw that the domain was free to purchase, and without any hesitation, I bought it back and decided to use it for my future projects. That is what you now see as the Jabir Project.

On-device models are the future of AI

The previous website you saw was an ugly landing page made with mvp.css in about two hours. And the model you were probably using (and hitting a lot of rate limits on) was a finetune of LLaMa 3.1 405B. I was thinking to myself, wasn't that a little bit overkill? The wise voice inside me said "It was".

And everything else aside, I've recently seen that a lot of good models are small and can run easily on a normal end-user computer. The best example is LLaMA 3.2, especially at the 1B and 3B sizes. It would be crazy not to keep those models in mind when you're trying to make affordable AI for everyone.

Honestly, I think a lot of these models will run on-device in the coming years, and the future of AI, at least for text-based models, will be on-device.

Here is our plan for making the project more on-device, accessible, and affordable for people who want a safe, secure, and private AI system.

The Plan

  • Phase one, the API: We're still going to provide an API for testing the model's performance at different sizes and with different hyperparameters. Getting feedback from the community is important.
  • Phase two, the Data: Datasets matter too. We're going to provide openly accessible datasets for people who are willing to help with data sanitization and related work. This will be the most important phase of the project, because without data, AI models are just a bunch of mathematical functions stacked on top of each other and nothing more.
  • Phase three, the Finetune: Now we're talking about the fun part, right? In this phase we'll finetune small models on the data we've gathered, then upload the model weights for anyone who wants to use them.
  • Phase four, a possible LLM Operating System: I prefer to keep this one a surprise for you 🙂

Conclusion

If you think about it for a moment, you'll realize OpenAI's services are becoming cheaper every day because a lot of people now use FLOSS (Free/Libre and Open Source Software) equivalents of their so-called "open" artificial intelligence systems. Yes, I believe that; I believe open source equivalents affect even closed-source and proprietary software this much.

So, in conclusion: in the current AI ecosystem we need tools that are self-hostable, modifiable, and redistributable by nature. And if you're ready, we're going to make that a reality together.

Let's build LLMs together!