Introduction to OpenAI Assistants V2
The OpenAI Assistants API is a customizable AI tool that developers can integrate into their applications to build powerful, interactive AI assistants.
It leverages advanced models and is enhanced with specialised capabilities, including conversation
history management and tools like code interpretation, file search, and function calling.

Let's start with function calling. This feature allows the assistant to call custom functions or interact with external APIs.
Code interpreter enables the assistant to write and execute Python
code in a controlled, sandboxed environment.
It's ideal for performing calculations, generating data visualizations, or even automating
complex data processing tasks.
Then there is retrieval, which is also known as file search.
This feature allows developers to upload files that the assistant can reference to enhance
its responses. We will be going through the updated version, v2.
So this is the chart for OpenAI Assistant v2.
We start with uploading a file into OpenAI.
Once the file is uploaded, a file ID is generated.
We then create a vector store and pass that file ID to the vector store, so
that the file is linked to the vector store.
The next step is to create the assistant.
The assistant will have information like the assistant name, the assistant instructions, and the assistant tools,
i.e. whether we want to use it for code interpretation, file search,
or function calling.
And finally there is the assistant tool resource.
For file search, we will be passing the vector store ID inside this assistant tool resource.
With all this information, the assistant will be created.
Next is the thread.
We create a thread and attach all the user messages to it.
Then we run this thread with the help of the assistant we created.
I will give detailed examples of this later in the video.
For now, just understand that this thread will contain all the messages the user sends
to the assistant, and the assistant will add its response messages to the same thread.
Okay, now let's see each of these steps in detail.
All the examples are shown in curl; you can convert these curl commands to any language.
First, we have the file upload.
We need to hit the endpoint with the purpose parameter set to assistants.
If we were uploading the file for fine-tuning, we would use fine-tune as the purpose
here.
But as we are using it for an AI assistant, we use assistants in the file
upload.
This API call results in the creation of a file ID, which you can see in the response.
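The transcript refers to a curl command shown on screen; a minimal sketch of that file-upload call might look like this (the file name `knowledge.pdf` is a placeholder):

```shell
# Upload a file for use with the Assistants API.
# The purpose must be "assistants" (it would be "fine-tune" for fine-tuning).
curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F purpose="assistants" \
  -F file="@knowledge.pdf"
```

The response is a file object whose `id` field (e.g. something like `file-abc123`) is the file ID used in the later steps.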
Now our file is uploaded.
The next thing we need to create is the vector store.
For creating the vector store, we need to hit this endpoint.
Once we hit this endpoint, a vector store will be created, and the response will contain
the vector store ID.
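A sketch of the vector store creation call (the store name is a placeholder; the `OpenAI-Beta` header is required for Assistants v2 endpoints):

```shell
# Create a vector store to hold our embedded file content.
curl https://api.openai.com/v1/vector_stores \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"name": "my-knowledge-base"}'
```

The response contains a vector store ID (of the form `vs_...`) that we will need when linking the file and creating the assistant.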
Now before we go any further, it's important for you to understand some basics of embeddings
and vectors.
Embeddings are numerical representations of text.
A vector is a mathematical point that represents data in a format that AI algorithms
can understand.
So let's take an example.
We have a text string.
We convert this text string into a vector using an embedding model.
Once these vectors are created, we store them in vector stores.
The reason for storing these vectors in a vector store is that vector stores
are storage systems optimized for storing and retrieving vector embeddings.
In the case of OpenAI, when we add our files directly to vector stores, the vector
store automatically parses, chunks, embeds, and stores the files in a vector database that
is capable of both keyword and semantic search.
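To make the idea concrete, here is what a direct embedding call looks like. This is shown only to illustrate what the vector store does for you automatically; you would not need to call it yourself in the file search flow (the model name is one of OpenAI's embedding models):

```shell
# Convert a text string into a vector of numbers (an embedding).
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-3-small", "input": "How does AI work?"}'
```

The response contains an `embedding` array of floating-point numbers, which is the vector representation of the input text.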
Now we have our vector store created.
Next thing we need to do is link them together.
So we take our vector store ID and file ID, as shown in this curl API call.
Once we do this, the file is linked to the vector store via a vector store file.
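A sketch of the linking call (both IDs are placeholders standing in for the values returned by the earlier steps):

```shell
# Attach the uploaded file to the vector store.
curl https://api.openai.com/v1/vector_stores/vs_abc123/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"file_id": "file-abc123"}'
```

This creates a vector store file object, and OpenAI then parses, chunks, and embeds the file contents into the store.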
Now that we have our vector store, the next thing we need to create is the assistant.
For the assistant, we need this endpoint.
If you look at this request, we are passing the name of the assistant, the model we
are going to use, and the tool, which is file search in our case.
And in the tool resources, we are mentioning the vector store IDs.
This is all the information we need to create the assistant.
Once the assistant is created, we get our assistant ID in the response.
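A sketch of the assistant creation request; the name, instructions, model, and vector store ID here are illustrative placeholders:

```shell
# Create an assistant with the file_search tool wired to our vector store.
curl https://api.openai.com/v1/assistants \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "name": "File Search Assistant",
    "instructions": "Answer questions using the uploaded files.",
    "model": "gpt-4o",
    "tools": [{"type": "file_search"}],
    "tool_resources": {"file_search": {"vector_store_ids": ["vs_abc123"]}}
  }'
```

The response contains an assistant ID (of the form `asst_...`) that we will use when running the thread.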
Moving further, the next thing we need to do is create a fresh thread.
Imagine a thread as a conversation to which all the messages will be linked.
We use a simple endpoint and don't pass any information, because a thread is independent of the other objects
we created.
Once you create a thread, a thread ID is generated.
Imagine this thread, ABC123, got created now.
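The thread creation call is the simplest one in the flow; no body fields are required:

```shell
# Create an empty thread; it is independent of the file, store, and assistant.
curl https://api.openai.com/v1/threads \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d ''
```

The response contains a thread ID (of the form `thread_...`).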
The next thing we need is the messages.
These are the messages that will be searched against your files by the AI assistant.
So we need to create a message on the thread.
What we are doing here is combining the message and the thread ID.
If you look, we have passed a user question: how does AI work?
Explain in simple words.
So this is our message, and in the URL, as you can see, the thread ABC123 is used in
the endpoint.
So we have both pieces of information, and when we hit this API, we'll get a response confirming that the
message has been linked to the thread ID.
After this, if you retrieve your thread, you'll see that there is a message on the
thread which says: how does AI work?
Explain in simple words.
And the role for this message is user, because we are the ones asking the
question on the thread.
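A sketch of that message creation call, using a placeholder thread ID in the URL to stand in for the ABC123 thread from the example:

```shell
# Add a user message to the thread; the thread ID goes in the URL.
curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"role": "user", "content": "How does AI work? Explain in simple words."}'
```

The response is a message object with `"role": "user"`, linked to the thread ID in the URL.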
Once we do this, the next thing we need to do is run.
We have our assistant ID and we have our thread ID, and we are telling our assistant to run this thread.
When we run the thread, we pass the thread ID in the URL and the assistant ID in the request
body. Once we run the thread, the OpenAI assistant will add its response to the same thread.
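A sketch of the run request; both IDs are placeholders for the values returned earlier:

```shell
# Run the thread with the assistant: thread ID in the URL, assistant ID in the body.
curl https://api.openai.com/v1/threads/thread_abc123/runs \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{"assistant_id": "asst_abc123"}'
```

The run starts asynchronously; once it completes, the assistant's reply is appended to the thread as a new message.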
In order to see what response has been added, we need to retrieve the thread.
Once we retrieve the thread, we'll see that it has one more message, which
is from the assistant.
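A sketch of retrieving the messages on the thread (placeholder thread ID again):

```shell
# List the messages on the thread to read the assistant's reply.
curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"
```

The listing will now contain both the original user message and a new message with `"role": "assistant"` holding the answer.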