To use the Jabir Project APIs, you must be familiar with OpenAI's API specification. On this page, you will learn how to use the API with both cURL and Python.
API Base URL
The API is available at https://openai.jabirproject.org
Available models
- Jabir 400B: jabir-400b
- Jabir 400B Online: jabir-400b-online
- Jabir Evil: jabir-evil
- Choqok: a 1-billion-parameter model designed to run "on device"
API Endpoints
GET /v1/models
This endpoint returns the list of models available through our API, so you can decide which model you want to use.
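For example, you can query it with any HTTP client; the sketch below uses Python's requests package (an assumption on our part, any client will do):

import requests

# List the models exposed by the Jabir Project API
response = requests.get("https://openai.jabirproject.org/v1/models")
response.raise_for_status()

# The API is OpenAI-compatible, so the payload is expected to contain
# a "data" array of model objects; print it as-is to inspect it.
print(response.json())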
POST /v1/embeddings
If you want to develop RAG apps using Jabir models, you can use our embeddings as well. This endpoint is very similar to OpenAI's large embedding model and lets you embed text in multiple languages.
Parameters
- model (required): the name of the embedding model. The only supported name is jabir-embedding
- input (required): the input, as a string or an array of strings
This endpoint works with OpenAI's Python library as well.
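As a minimal sketch with the OpenAI client (assuming the response follows OpenAI's embeddings format, with the vectors under data[i].embedding):

from openai import OpenAI

client = OpenAI(api_key="FAKE", base_url="https://openai.jabirproject.org/v1")

# Embed two strings with the jabir-embedding model
result = client.embeddings.create(
    model="jabir-embedding",
    input=["introduce yourself", "who are you?"]
)

# One embedding vector is returned per input string
print(len(result.data), "vectors")
print(result.data[0].embedding[:5])  # first few dimensions of the first vector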
POST /v1/chat/completions
Since Jabir models are all designed as chat models, we currently support only the chat completions style. In the future, we may add plain completions as well.
Parameters
- model (required): the name of the model of your choice (the available models are listed above)
- messages (required): an array of messages (see the cURL and Python examples below)
- temperature: controls the randomness of the model's output
- max_tokens: the maximum number of tokens you want the model to generate
Code examples
cURL
curl --location --request POST 'https://openai.jabirproject.org/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "jabir-400b-online",
    "messages": [
        {
            "role": "user",
            "content": "introduce yourself"
        }
    ]
}'
You can also modify it, for example by adding a system message:
curl --location --request POST 'https://openai.jabirproject.org/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "jabir-400b-online",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "introduce yourself"
        }
    ]
}'
The output will then look like this:
{ "choices": [ { "message": { "content": "I am Jabir 400B Online, a large language model designed to assist you with a wide variety of topics by providing information and insights from the web. My purpose is to help you find the information you need, offer useful resources, and present everything in a clear and organized manner. Feel free to ask me anything you'd like to know!", "role": "assistant" } } ], "created": 1735587852.545231, "id": "2e0a2fee-2a98-444f-8220-76705ebf2d67", "model": "jabir-400b-online", "object": "chat.completion" }
You can parse this output however you need.
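For instance, if you save the raw response body, a few lines of Python are enough to pull out the assistant's reply (the JSON below is a truncated copy of the response shown above):

import json

# raw response body returned by the curl call above (content truncated)
raw = '{"choices": [{"message": {"content": "I am Jabir 400B Online...", "role": "assistant"}}], "model": "jabir-400b-online", "object": "chat.completion"}'

data = json.loads(raw)
print(data["choices"][0]["message"]["content"])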
Python
To use our API in Python, you first need to install OpenAI's library:
pip3 install openai
Then, import the library and initialize a client:

from openai import OpenAI

client = OpenAI(api_key="FAKE", base_url="https://openai.jabirproject.org/v1")
Since OpenAI's Python library requires an API key and our API doesn't, we just pass FAKE as the API key.
Next, we can make a request to our desired model:
completion = client.chat.completions.create(
    model="jabir-400b",
    messages=[
        {
            "role": "user",
            "content": "who are you?"
        }
    ]
)
Then, you can access what the assistant said:
print(completion.choices[0].message.content)
All of the code together looks like this:
from openai import OpenAI

client = OpenAI(api_key="FAKE", base_url="https://openai.jabirproject.org/v1")

completion = client.chat.completions.create(
    model="jabir-400b",
    messages=[
        {
            "role": "user",
            "content": "who are you?"
        }
    ]
)

print(completion.choices[0].message.content)
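The optional temperature and max_tokens parameters described above are passed in the same call, for example:

completion = client.chat.completions.create(
    model="jabir-400b",
    messages=[{"role": "user", "content": "who are you?"}],
    temperature=0.7,   # lower values make the output less random
    max_tokens=256,    # cap on the number of generated tokens
)

print(completion.choices[0].message.content)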
You can also use our OpenAI-compatible API with LangChain and other RAG frameworks.
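As a minimal sketch with LangChain (assuming the langchain-openai package is installed; the exact import path may differ between LangChain versions):

from langchain_openai import ChatOpenAI

# Point LangChain's OpenAI chat wrapper at the Jabir Project endpoint
llm = ChatOpenAI(
    model="jabir-400b",
    api_key="FAKE",
    base_url="https://openai.jabirproject.org/v1",
)

response = llm.invoke("who are you?")
print(response.content)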