Free AutoGPT: A Powerful AI Agent Without Paid APIs, with Streamlit

Hello everyone :smiling_face_with_three_hearts: ,

I wanted to start by talking about how important it is to democratize AI. Unfortunately, most new applications or discoveries in this field end up enriching a few big companies, leaving behind small businesses and simple projects. One striking example of this is AutoGPT, an autonomous AI agent capable of performing tasks on its own.

AutoGPT and similar projects like BabyAGI only work with paid APIs, which is not fair. That’s why I tried to recreate a simpler but very interesting and, above all, open-source version of AutoGPT that does not require any API key and does not need any particular hardware.

I believe that by providing free and open-source AI tools, we can give small businesses and individuals the opportunity to create new and innovative projects without the need for significant financial investment. This will allow for more equitable and diverse access to AI technology, which is essential for advancing society as a whole.



First, I searched everywhere for easily accessible and free websites or endpoints to use. Eventually, I came across this simple library: T3nsor. It lets us use the GPT-3.5 API completely for free.

Here’s an example of how you can create a simple chatbot using this library in your local environment:

import t3nsor

messages = []

while True:
    user = input('you: ')

    t3nsor_cmpl = t3nsor.Completion.create(
        prompt=user,
        messages=messages,
    )

    print('gpt:', t3nsor_cmpl.completion.choices[0].text)

    # Keep the conversation history so the model has context on the next turn
    messages.extend([
        {'role': 'user', 'content': user},
        {'role': 'assistant', 'content': t3nsor_cmpl.completion.choices[0].text},
    ])

After finding this free endpoint, I had to create a custom wrapper for my LLM using LangChain. This is because LangChain mostly integrates LLM providers that are only available for a fee. However, it lets us create a custom component based on the free endpoint.

This is the code for the custom LLM wrapper:

from langchain.llms.base import LLM
from typing import Optional, List, Mapping, Any
import t3nsor

class gpt3NoInternet(LLM):
    messages: List[Mapping[str, Any]]

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        t3nsor_cmpl = t3nsor.Completion.create(
            prompt=prompt,
            messages=self.messages,
        )
        response = t3nsor_cmpl.completion.choices[0].text
        # Record both sides of the exchange so the next call keeps context
        self.messages.append({'role': 'user', 'content': prompt})
        self.messages.append({'role': 'assistant', 'content': response})
        return response

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"messages": self.messages}

# llm = gpt3NoInternet(messages=[])
# print(llm("Never forget you are a Python Programmer and I am a Stock Trader."))

After creating our custom wrapper for the LLM, I stumbled upon a fascinating project called CAMEL. This project shares many similarities with Autogpt, and it can be easily integrated with our custom LLM. CAMEL aims to develop scalable techniques that enable autonomous cooperation among communicative agents while also providing insights into their “cognitive” processes.

CAMEL’s unique approach focuses on creating agents that can understand and reason about natural language, as well as use it to interact with other agents. This type of agent could have a significant impact on many areas, including customer service, virtual assistants, and even education.
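To give a rough idea of how a CAMEL-style role-playing loop works, here is a minimal sketch. The agent names mirror the demo roles mentioned later in the thread, but `fake_llm` is just a stand-in for a real model call (for example the custom wrapper above); this is an illustration, not CAMEL's actual implementation:

```python
# Minimal sketch of a CAMEL-style two-agent loop.
# `fake_llm` is a placeholder for a real LLM call; in practice each agent
# would keep its own system prompt and conversation history.

def fake_llm(role: str, incoming: str) -> str:
    # A real model call (e.g. via the gpt3NoInternet wrapper) would go here.
    return f"[{role}] reply to: {incoming}"

def role_play(task: str, turns: int = 3) -> list:
    transcript = []
    message = f"Here is the task: {task}"
    for _ in range(turns):
        # The "user" agent issues an instruction...
        instruction = fake_llm("AI User (Stock Trader)", message)
        transcript.append(instruction)
        # ...and the "assistant" agent tries to carry it out.
        message = fake_llm("AI Assistant (Python Programmer)", instruction)
        transcript.append(message)
    return transcript

log = role_play("Develop a trading bot for the stock market")
```

Each turn produces one instruction and one response, so the transcript grows two messages per round; a real run would stream these into the UI.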

And finally, I put it all together using Streamlit and streamlit_chat_media.
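As a rough sketch of how the pieces could be wired together in Streamlit (assumptions: the `message` helper from streamlit_chat_media mirrors streamlit_chat's API, and `gpt3NoInternet` is the wrapper shown above; this is illustrative glue, not the project's actual app code):

```python
# Sketch of the Streamlit glue. The UI code lives in main() so the pure
# helper below stays importable/testable without Streamlit installed.

def render_history(history):
    """Flatten (role, text) pairs into printable lines; pure helper."""
    return [f"{role}: {text}" for role, text in history]

def main():
    import streamlit as st
    # Assumed to mirror streamlit_chat's message(text, is_user=..., key=...) API
    from streamlit_chat_media import message

    st.title("Free AutoGPT")
    if "history" not in st.session_state:
        st.session_state.history = []

    user_input = st.text_input("You:")
    if user_input:
        # llm = gpt3NoInternet(messages=[])   # the custom wrapper from above
        # reply = llm(user_input)
        reply = "..."  # placeholder for the real model call
        st.session_state.history.append(("user", user_input))
        st.session_state.history.append(("assistant", reply))

    # Re-render the whole conversation on every script run
    for i, (role, text) in enumerate(st.session_state.history):
        message(text, is_user=(role == "user"), key=str(i))

# In a real app.py you would call main() at module level
# and launch with: streamlit run app.py
```

Keeping the history in `st.session_state` matters because Streamlit reruns the whole script on every interaction.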




That sounds like an awesome app, @alessandro_ciciarell! :raised_hands:

I will certainly try it.

May I ask if this is deployed anywhere so people can try it straight away?



I am very pleased that this code has piqued your interest. :smiling_face_with_three_hearts: :smiling_face_with_three_hearts:

At the moment I’m developing it to make it usable with just a few clicks.
This is an example of the app running locally on a very low-resource computer.

Right now it’s very easy to start playing with and help me improve the code…


Wow, I really must compliment you on your idea and your desire to democratize AI. I’ve tried it a few times and I have to say that with some tweaks this project has a lot of potential. I’ll leave you a very interesting usage log here, though in the end it went wrong.

The most fascinating thing is the level of communication held by the two AIs. Congratulations again.


I got an error: module not found. It seems to be the API replacement module, but it wasn’t in the repo. I’m not good at any of this, so maybe I missed something obvious.

2023-04-19 **.my.ip* Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\UserName\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\", line 565, in _run_script
    exec(code, module.__dict__)
  File "D:\Program Files\AutoGPT\", line 17, in <module>
    from phindAPI import phindGPT4Internet
  File "D:\Program Files\AutoGPT\", line 3, in <module>
    import phind
ModuleNotFoundError: No module named 'phind'

@beta_tester, thanks :star_struck: I’ve now managed to add GPT-4, so you shouldn’t see this error anymore.

Updated repo.

@Ember_Tilton I’m very sorry, you probably downloaded it while I was making the last few commits. Now everything is functional and easily executable. I added the ability to use GPT-4 connected to the internet.

:hugs: While the virtual agents are talking you will be able to see that links are being sent.

Please share your opinions, feedback or ideas to improve the project :partying_face: :partying_face:


I tried your app and it’s awesome and scary. The level of conversation is impressive.

But I found a problem: the agents don’t stop even when they reach the solution to the task.


@Barber_Fusion thanks :smiling_face_with_three_hearts:, at the moment I’m working on a third agent that monitors the conversation and stops it when it “believes” the task has been solved :face_with_peeking_eye:
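One simple way such a moderator could work is to scan each message for a completion marker, similar to the `<CAMEL_TASK_DONE>` token CAMEL uses in its prompts. This is a hypothetical sketch, not the author's actual implementation; the function names and markers are illustrative:

```python
# Hypothetical "moderator" that stops the two agents once the task looks
# solved. Markers and names are illustrative, not the project's code.

DONE_MARKERS = ("<CAMEL_TASK_DONE>", "TASK_DONE")

def task_solved(message: str) -> bool:
    """Return True if a message contains a completion marker."""
    return any(marker in message for marker in DONE_MARKERS)

def run_until_done(turns, max_turns=50):
    """`turns` is any iterable of agent messages; stop early when solved."""
    transcript = []
    for i, msg in enumerate(turns):
        transcript.append(msg)
        if task_solved(msg) or i + 1 >= max_turns:
            break
    return transcript
```

A smarter moderator could itself be an LLM judging whether the task is complete, at the cost of one extra call per turn.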

This is very clever, but are these “free” APIs legal? It doesn’t look like it, in which case they won’t last long.


This looks absolutely awesome.
I’m new to this, so I struggled a bit with the install (so maybe I did something wrong).
But if I run it with the defaults, it just gives me:

AI Assistant (Python Programmer):

AI User (Stock Trader):

all the time. Your help would be greatly appreciated.


SSLError: HTTPSConnectionPool(host='', port=443): Max retries exceeded with url: /api/chat (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)')))


@Harshit_Joshi , @Thys_Swanepoel , @fredzannarbor , @Charly_Wargnier , @beta_tester , @Ember_Tilton , @Barber_Fusion

Sorry but the previous solution had some bugs, now everything works even better and in a simpler way. Please see the updated repository.

:bomb: Now I have added the AUTOGPT version and it works so WELL :stuck_out_tongue_winking_eye:

Let me know if you have any problems or bugs :saluting_face:

Thanks for updating us, but I do not think this is a great idea. Bound to get shut down. Why not spend all this good energy on one of the Llama/Alpaca variants?


@fredzannarbor I honestly don’t think our project is doomed to be shut down; we are doing what we can to make AI accessible to everyone, without needing to depend on companies like OpenAI.

We have tried, and still try daily, open-source models like Llama/Alpaca/Vicuna/GPT4All/OpenAssistant, but the 7B versions are still toys, and the versions with more parameters require hardware that not everyone can afford.

We also managed to create a notebook on Google Colab to test all the agents created to date, and it can even run Streamlit on Colab without any problems :slight_smile:

RUN NOW ON COLAB :open_mouth::point_right: Open in Colab

HELP US TO :hugs: Democratize AI :hugs:

I must be misunderstanding something. Why would we need a ChatGPT token for something that doesn’t depend on OpenAI?


@Goyo It’s very simple. Currently, if you want to develop applications based on LLM models (unless you have the money or suitable hardware), you are forced to pay for API calls to OpenAI, Cohere, or other companies.

We applied a reverse-engineering technique and managed to use the ChatGPT endpoints without paying OpenAI anything.

The repository depends on the free ChatGPT version created by OpenAI, but it does not depend on paid API calls and does not require special hardware.

But if you have the right hardware (Nvidia graphics cards, RAM > 30 GB) or you don’t mind paying for API calls, this repository probably isn’t for you.

Have you tried the repository?

Not yet. I am basically ignorant and trying to understand.

I guess those free APIs are subject to change without warning, and users can be throttled or even banned for any reason, especially for abnormal usage. That makes this approach quite fragile, but in no way illegal. At most, users calling the API in ways forbidden by the TOS would get their tokens revoked.


@Goyo Of course OpenAI won’t send us an email with the new endpoint if it changes, but as long as ChatGPT remains a free service they will always have to expose an endpoint.

At the moment our implementation respects all of OpenAI’s limits. ChatGPT allows a maximum of 50 messages per hour; after 50 messages it replies with “too many messages in one hour”.

In our solution we count the calls, and when the count reaches 45 we raise an error notifying the user that they have used up their calls and must try again in an hour.

As for being banned, that seems to me an exaggeration, since between one call and the next there is a time delay (an incremental variable in seconds plus the ChatGPT response time), and the request is crafted so as to appear to have been made from the ChatGPT site itself.
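The throttling described above can be sketched roughly like this. The 45-call cap and the incremental delay follow the description in this thread, but the class itself is hypothetical, not the project's actual code:

```python
import time

class CallBudget:
    """Rough sketch of the client-side throttling described above:
    cap calls below ChatGPT's ~50-messages-per-hour limit and add an
    incremental delay between calls. Illustrative only."""

    def __init__(self, max_calls=45, base_delay=1.0):
        self.max_calls = max_calls
        self.base_delay = base_delay
        self.calls = 0

    def before_call(self):
        """Call this immediately before each request to the endpoint."""
        if self.calls >= self.max_calls:
            raise RuntimeError(
                "Hourly call budget exhausted - please try again in an hour."
            )
        # Incremental delay: grows a little with every call already made
        time.sleep(self.base_delay + 0.1 * self.calls)
        self.calls += 1
```

A real implementation would also reset the counter after an hour; that bookkeeping is omitted here for brevity.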

Please check the code or read the documentation on the repository before saying things that are not really true.

Hi, could you help me? I want to run your application, but the instructions in your README don’t work. Could you update them or share what we need to do? Just the instructions, please, that would be very nice. I use Windows 11…
Many thanks for your work.