By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for any copies, forks, or reuploads made by other users, or anything else related to gpt4free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license that this repository uses.
```sh
pip install -U g4f
```
```sh
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
python3 -m venv venv
```

On Windows:

```sh
.\venv\Scripts\activate
```

On macOS and Linux:

```sh
source venv/bin/activate
```
Install the required Python packages from requirements.txt:

```sh
pip install -r requirements.txt
```
Create a test.py file in the root folder and start using the repo; further instructions are below:

```py
import g4f
...
```
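For a quick first run, a minimal test.py could look like the following sketch (the model name and prompt are placeholders; the full API is shown further below):

```py
import g4f

# No provider given: g4f selects a working provider automatically.
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```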
If you have Docker installed, you can easily set up and run the project without manually installing dependencies.
First, ensure you have both Docker and Docker Compose installed.
Clone the GitHub repo:
```sh
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
```
Build the Docker image:

```sh
docker compose build
```

Then start the service:

```sh
docker compose up
```
Your server will now be running at http://localhost:1337. You can interact with the API or run your tests as you would normally.
To stop the Docker containers, simply run:
```sh
docker compose down
```
Note: When using Docker, any changes you make to your local files will be reflected in the Docker container thanks to the volume mapping in the `docker-compose.yml` file. If you add or remove dependencies, however, you'll need to rebuild the Docker image using `docker compose build`.
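The exact file ships with the repo, but a volume mapping of the kind described above typically looks like this in a docker-compose.yml; the service name and paths here are illustrative assumptions, not the repository's actual configuration:

```yaml
services:
  gpt4free:
    build: .
    ports:
      - "1337:1337"  # assumed host:container port, matching the URL above
    volumes:
      - .:/app       # local files are mirrored into the container (illustrative path)
```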
The `g4f` Package:

```py
import g4f

print(g4f.Provider.Ails.params)  # supported args

# Automatic selection of provider

# streamed completion
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message, flush=True, end='')

# normal response
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_4,
    messages=[{"role": "user", "content": "hi"}],
)  # alternative model setting

print(response)

# Set with provider
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.DeepAi,
    messages=[{"role": "user", "content": "Hello world"}],
    stream=True,
)

for message in response:
    print(message)
```
```py
from g4f.Provider import (
    AItianhu,
    Acytoo,
    Aichat,
    Ails,
    Aivvm,
    Bard,
    Bing,
    ChatBase,
    ChatgptAi,
    ChatgptLogin,
    CodeLinkAva,
    DeepAi,
    H2o,
    HuggingChat,
    Opchatgpts,
    OpenAssistant,
    OpenaiChat,
    Raycast,
    Theb,
    Vercel,
    Vitalentum,
    Wewordle,
    Ylokh,
    You,
    Yqcloud,
)

# Usage:
response = g4f.ChatCompletion.create(..., provider=ProviderName)
```
Cookies are essential for the proper functioning of some service providers. It is imperative to maintain an active session, typically achieved by logging into your account. When running the g4f package locally, the package automatically retrieves cookies from your web browser using the `get_cookies` function. However, if you're not running it locally, you'll need to provide the cookies manually by passing them via the `cookies` parameter.
```py
import g4f

from g4f.Provider import (
    Bard,
    Bing,
    HuggingChat,
    OpenAssistant,
    OpenaiChat,
)

# Usage:
response = g4f.ChatCompletion.create(
    model=g4f.models.default,
    messages=[{"role": "user", "content": "Hello"}],
    provider=Bard,
    #cookies=g4f.get_cookies(".google.com"),
    cookies={"cookie_name": "value", "cookie_name2": "value2"},
    auth=True,
)
```
To enhance speed and overall performance, execute providers asynchronously. The total execution time will be determined by the duration of the slowest provider's execution.
```py
import g4f, asyncio

async def run_async():
    _providers = [
        g4f.Provider.AItianhu,
        g4f.Provider.Acytoo,
        g4f.Provider.Aichat,
        g4f.Provider.Ails,
        g4f.Provider.Aivvm,
        g4f.Provider.ChatBase,
        g4f.Provider.ChatgptAi,
        g4f.Provider.ChatgptLogin,
        g4f.Provider.CodeLinkAva,
        g4f.Provider.DeepAi,
        g4f.Provider.Opchatgpts,
        g4f.Provider.Vercel,
        g4f.Provider.Vitalentum,
        g4f.Provider.Wewordle,
        g4f.Provider.Ylokh,
        g4f.Provider.You,
        g4f.Provider.Yqcloud,
    ]
    responses = [
        provider.create_async(
            model=g4f.models.default,
            messages=[{"role": "user", "content": "Hello"}],
        )
        for provider in _providers
    ]
    responses = await asyncio.gather(*responses)
    for idx, provider in enumerate(_providers):
        print(f"{provider.__name__}:", responses[idx])

asyncio.run(run_async())
```
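Note that one failing provider will cancel the whole asyncio.gather call above with its exception. A common variant, using standard asyncio rather than anything g4f-specific, is to collect errors alongside results:

```py
# Replace the gather line above with this to keep going when a provider fails;
# failed providers then appear as exception objects in the printed output.
responses = await asyncio.gather(*responses, return_exceptions=True)
```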
Get the requirements:

```sh
pip install -r interference/requirements.txt
```

Run the server:

```sh
python3 -m interference.app
```
```py
import openai

openai.api_key = ""
openai.api_base = "http://localhost:1337"

def main():
    chat_completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "write a poem about a tree"}],
        stream=True,
    )

    if isinstance(chat_completion, dict):
        # not stream
        print(chat_completion.choices[0].message.content)
    else:
        # stream
        for token in chat_completion:
            content = token["choices"][0]["delta"].get("content")
            if content is not None:
                print(content, end="", flush=True)

if __name__ == "__main__":
    main()
```
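Since the interference server mimics the OpenAI REST API (the openai client above simply appends /chat/completions to api_base), you can also call it over plain HTTP. A sketch using requests, assuming that endpoint path and a non-streaming response:

```py
import requests

# POST to the OpenAI-compatible endpoint exposed by the local server.
resp = requests.post(
    "http://localhost:1337/chat/completions",  # path assumed to mirror OpenAI's
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```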
Website | Provider | gpt-3.5 | gpt-4 | Streaming | Async | Auth
---|---|---|---|---|---|---
www.aitianhu.com | g4f.provider.AItianhu | ✔️ | ❌ | ✔️ | ✔️ | ❌
chat.acytoo.com | g4f.provider.Acytoo | ✔️ | ❌ | ✔️ | ✔️ | ❌
chat-gpt.org | g4f.provider.Aichat | ✔️ | ❌ | ❌ | ✔️ | ❌
ai.ls | g4f.provider.Ails | ✔️ | ❌ | ✔️ | ✔️ | ❌
chat.aivvm.com | g4f.provider.Aivvm | ✔️ | ✔️ | ✔️ | ✔️ | ❌
bard.google.com | g4f.provider.Bard | ❌ | ❌ | ❌ | ✔️ | ✔️
bing.com | g4f.provider.Bing | ❌ | ✔️ | ✔️ | ✔️ | ❌
www.chatbase.co | g4f.provider.ChatBase | ✔️ | ✔️ | ✔️ | ✔️ | ❌
chatgpt.ai | g4f.provider.ChatgptAi | ✔️ | ❌ | ✔️ | ✔️ | ❌
opchatgpts.net | g4f.provider.ChatgptLogin | ✔️ | ❌ | ❌ | ✔️ | ❌
ava-ai-ef611.web.app | g4f.provider.CodeLinkAva | ✔️ | ❌ | ✔️ | ✔️ | ❌
deepai.org | g4f.provider.DeepAi | ✔️ | ❌ | ✔️ | ✔️ | ❌
gpt-gm.h2o.ai | g4f.provider.H2o | ❌ | ❌ | ✔️ | ✔️ | ❌
huggingface.co | g4f.provider.HuggingChat | ❌ | ❌ | ✔️ | ✔️ | ✔️
opchatgpts.net | g4f.provider.Opchatgpts | ✔️ | ❌ | ❌ | ✔️ | ❌
open-assistant.io | g4f.provider.OpenAssistant | ❌ | ❌ | ✔️ | ✔️ | ✔️
chat.openai.com | g4f.provider.OpenaiChat | ✔️ | ❌ | ❌ | ✔️ | ✔️
raycast.com | g4f.provider.Raycast | ✔️ | ✔️ | ✔️ | ❌ | ✔️
theb.ai | g4f.provider.Theb | ✔️ | ❌ | ✔️ | ❌ | ✔️
sdk.vercel.ai | g4f.provider.Vercel | ✔️ | ❌ | ❌ | ✔️ | ❌
app.vitalentum.io | g4f.provider.Vitalentum | ✔️ | ❌ | ✔️ | ✔️ | ❌
wewordle.org | g4f.provider.Wewordle | ✔️ | ❌ | ❌ | ✔️ | ❌
chat.ylokh.xyz | g4f.provider.Ylokh | ✔️ | ❌ | ✔️ | ✔️ | ❌
you.com | g4f.provider.You | ✔️ | ❌ | ❌ | ✔️ | ❌
chat9.yqcloud.top | g4f.provider.Yqcloud | ✔️ | ❌ | ✔️ | ✔️ | ❌
aiservice.vercel.app | g4f.provider.AiService | ✔️ | ❌ | ❌ | ❌ | ❌
chat.dfehub.com | g4f.provider.DfeHub | ✔️ | ❌ | ✔️ | ❌ | ❌
free.easychat.work | g4f.provider.EasyChat | ✔️ | ❌ | ✔️ | ❌ | ❌
next.eqing.tech | g4f.provider.Equing | ✔️ | ❌ | ✔️ | ❌ | ❌
chat9.fastgpt.me | g4f.provider.FastGpt | ✔️ | ❌ | ✔️ | ❌ | ❌
forefront.com | g4f.provider.Forefront | ✔️ | ❌ | ✔️ | ❌ | ❌
chat.getgpt.world | g4f.provider.GetGpt | ✔️ | ❌ | ✔️ | ❌ | ❌
liaobots.com | g4f.provider.Liaobots | ✔️ | ✔️ | ✔️ | ✔️ | ❌
supertest.lockchat.app | g4f.provider.Lockchat | ✔️ | ✔️ | ✔️ | ❌ | ❌
p5.v50.ltd | g4f.provider.V50 | ✔️ | ❌ | ❌ | ❌ | ❌
chat.wuguokai.xyz | g4f.provider.Wuguokai | ✔️ | ❌ | ❌ | ❌ | ❌
Model | Base Provider | Provider | Website
---|---|---|---
palm | Google | g4f.Provider.Bard | bard.google.com
h2ogpt-gm-oasst1-en-2048-falcon-7b-v3 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-falcon-40b-v1 | Huggingface | g4f.Provider.H2o | www.h2o.ai |
h2ogpt-gm-oasst1-en-2048-open-llama-13b | Huggingface | g4f.Provider.H2o | www.h2o.ai |
claude-instant-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v1 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
claude-v2 | Anthropic | g4f.Provider.Vercel | sdk.vercel.ai |
command-light-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
command-nightly | Cohere | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-neox-20b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-1-pythia-12b | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
oasst-sft-4-pythia-12b-epoch-3.5 | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
santacoder | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
bloom | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
flan-t5-xxl | Huggingface | g4f.Provider.Vercel | sdk.vercel.ai |
code-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-3.5-turbo-16k-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
gpt-4-0613 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-ada-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-babbage-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-curie-001 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-002 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
text-davinci-003 | OpenAI | g4f.Provider.Vercel | sdk.vercel.ai |
llama13b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
llama7b-v2-chat | Replicate | g4f.Provider.Vercel | sdk.vercel.ai |
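In principle, any model listed above can be requested by passing its name and the matching provider to g4f; a hedged example follows (availability of the model through Vercel is not guaranteed):

```py
import g4f

# claude-instant-v1 via the Vercel provider, as listed in the table above.
response = g4f.ChatCompletion.create(
    model="claude-instant-v1",
    provider=g4f.Provider.Vercel,
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```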
🎁 Projects:

- gpt4free
- gpt4free-ts
- ChatGPT-Clone
- ChatGpt Discord Bot
- LangChain gpt4free
- ChatGpt Telegram Bot
- Action Translate Readme
- Langchain Document GPT
To add another provider, it's very simple:
```py
from .base_provider import BaseProvider
from ..typing import CreateResult, Any


class HogeService(BaseProvider):
    url = "http://hoge.com"
    working = True
    supports_gpt_35_turbo = True

    @staticmethod
    def create_completion(
        model: str,
        messages: list[dict[str, str]],
        stream: bool,
        **kwargs: Any,
    ) -> CreateResult:
        pass
```
Set `working` to True, implement `create_completion`, and `yield` the response, even if it's a one-time response; do not hesitate to look at other providers for inspiration (a hypothetical sketch follows the snippet below). Then register the new provider in the Provider package's `__init__.py`:

```py
from .base_provider import BaseProvider
from .HogeService import HogeService

__all__ = [
    HogeService,
]
```
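For illustration, a hypothetical create_completion that yields a one-time response might look like the sketch below; the endpoint, request payload, and response field are invented for the example and do not correspond to a real API:

```py
import requests

from ..typing import CreateResult, Any
from .base_provider import BaseProvider


class HogeService(BaseProvider):
    url = "http://hoge.com"
    working = True
    supports_gpt_35_turbo = True

    @staticmethod
    def create_completion(
        model: str,
        messages: list[dict[str, str]],
        stream: bool,
        **kwargs: Any,
    ) -> CreateResult:
        # Hypothetical endpoint and payload - adapt to the real service.
        response = requests.post(
            "http://hoge.com/api/chat",
            json={"model": model, "messages": messages},
        )
        response.raise_for_status()
        # Yield even a one-time answer so callers can always iterate.
        yield response.json()["text"]  # hypothetical response field
```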
```py
import g4f

response = g4f.ChatCompletion.create(
    model='gpt-3.5-turbo',
    provider=g4f.Provider.PROVIDERNAME,
    messages=[{"role": "user", "content": "test"}],
    stream=g4f.Provider.PROVIDERNAME.supports_stream,
)

for message in response:
    print(message, flush=True, end='')
```
We are currently implementing new features and trying to scale the service, but please be patient as it may be unstable. https://chat.g4f.ai/chat This site was developed by me and includes gpt-4/3.5, internet access, and GPT jailbreaks like DAN.
Run it locally here: https://github.com/xtekky/chatgpt-clone.
This program is licensed under the GNU GPL v3.
xtekky/gpt4free: Copyright (C) 2023 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.