J1 is here to reason

In the past few months, OpenAI released o1, Qwen released a reasoning model, and the new kid in town is DeepSeek's R1. We have also decided to release a reasoning model, based on LLaMa 405B, called J1. (I personally think we need to be a little more creative in naming reasoning models.)

Anyway, this model is now available and in this article, we’re going to break it down for you.

The Importance of Reasoning Models

In the world of Large Language Models, it is important to build models capable of imitating human behavior, and reasoning is one of the most important aspects of that behavior.

I personally believe that reasoning models are the key to a lot of things; AGI and Large World Models can be two of those things.

Currently, we are working on J1 and it is working like a charm. It was an amazing moment for us to see the model working like this; we never could have imagined how good its output would be.

Comparing with other models

As you can see, J1's performance is nearly identical to R1's, and in some cases it is even better. We're working hard to improve it further in the near future.

Accessing the model

The model is totally free through our API and Chatbox.
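If you prefer to call it from code, a minimal sketch using Python's openai client against an OpenAI-compatible endpoint could look like the following; the base URL, API key, and model identifier below are illustrative placeholders, not confirmed values.

```python
# Minimal sketch: calling J1 through an OpenAI-compatible endpoint.
# The base_url, api_key, and model name are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.jabirproject.example/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # use whatever key the service issues
)

response = client.chat.completions.create(
    model="j1",  # placeholder model identifier
    messages=[{"role": "user", "content": "How many prime numbers are there below 50?"}],
)
print(response.choices[0].message.content)
```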

Future Plans

Adding vision capability

Most reasoning models aren't capable of understanding images. Since jabir-400b has this capability and we're going to build tools on top of our existing models, we are working hard to add vision to our reasoning model as well.

The current version doesn't support vision, but it will in the near future. This is a strong promise.

Making it possible to run locally

Since we made Choqok, a 1B model that can run on pretty much any computer you can think of, we are on our way to making this model run locally as soon as possible.

We are also going to use this model in Mann-E's new content creation engine, which we'll be talking about very soon.

Conclusion

In a world where we see a new AI model on pretty much an hourly basis, as an AI company it is important to find new spaces to compete in. J1 is our newest competition ground, and we will keep making it better in the near future.

If you have any questions, feel free to message us at info@jabirproject.org. We are also open to investors and sponsors.

Regards

Muhammadreza Haghiri

Choqok, the smallest model we could make to run on your devices

The Jabir Project was started in late 2011 by Muhammadreza Haghiri as an attempt to democratize computing. The original product was a Linux distribution with Free/Libre & Open Source Software (FLOSS) goals. The operating system project went dark in 2016 due to some problems, but in 2024, the Jabir Project came back.

The comeback came with the slogan "Let's build LLMs together". Although our flagship model Jabir 400B isn't open source yet, we've decided to make our smaller, easier-to-use models open source and available for public use.

Now, we're pleased to announce the release of Choqok, a one-billion-parameter model which can run on pretty much any GPU and device you can think of.

How to access Choqok

To access Choqok, you can use our OpenAI-compatible API, or you can use our newly released ChatUI (more details later in this article).

But if you have 2 GB of GPU VRAM (or use the quantizations available here, which can run on CPU with Ollama and 2+ GB of RAM) and basic knowledge of Python, you can pay a visit to our HuggingFace page and start using the model programmatically.
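As a rough sketch of the programmatic route with Hugging Face transformers, something like the following should work; the repository name below is a placeholder, so use the actual one from our HuggingFace page.

```python
# Minimal sketch: running Choqok locally with Hugging Face transformers.
# The model id is a placeholder; replace it with the real repository name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="jabirproject/choqok-1b",  # placeholder model id
    device_map="auto",               # falls back to CPU if no GPU is available
)

output = generator(
    "Explain what a large language model is in one sentence.",
    max_new_tokens=64,
)
print(output[0]["generated_text"])
```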

Known problems

  1. It still has problems generating Persian/Arabic text, although it understands Persian grammar.
  2. It has some problems understanding its own name, which is a result of the base model (LLaMa 3.2) and the synthetic data we've used. If you have used DeepSeek v3, you may have faced the same problem of the model saying it is made by OpenAI.

Our Chat User Interface

For the sake of simplicity, in addition to our API (which is completely compatible with OpenAI's architecture and tested with Vercel's ai-sdk and Python's openai library), we also released a Chat User Interface which can be used to test our models.
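To sketch what that compatibility looks like from Python's openai library, a streaming request might look like this; the endpoint, key, and model name are placeholders rather than confirmed values.

```python
# Sketch of a streaming request against the OpenAI-compatible API.
# Endpoint, key, and model name are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="https://api.jabirproject.example/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="choqok",  # placeholder model identifier
    messages=[{"role": "user", "content": "Write a haiku about open source."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```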

Now you can open it without signing up and without any concerns about your data being seen by a third party, and use the models with ease. Just open ChatUI, choose your model, and start prompting!

For Businesses

If you own a business and need a specific model tuned or trained, or if you're using our APIs more than usual, you can leave a message at info@jabirproject.org and tell us about your needs.

Also, since the Jabir Project is considered a non-profit organization helping other AI projects and open source developers, we're open to any type of investment or donation from our user base.

Conclusion

The Jabir Project is currently improving prompts for Atelier AI's image generation, is used to summarize video transcriptions on YouTubeLM, will be used for image/video generation on Mann-E, and will also be the backbone of a new platform named Rapidens. This means we are already in our production phase, and now it is time to go more and more on-device.

On-device models are important because they guarantee the digital freedom of computer users, and while most of the FLOSS community, especially in Iran, is completely passive, we're ready to take serious action toward truly open AI.