https://marker-inc-korea.github.io/AutoRAG/data_creation/tutorial.html#use-multiple-prompts
Following the repo a developer shared here earlier and its docs, I finished generating corpus data from my raw data and am now generating QA data from that corpus. I used the "Use multiple prompts" approach:
import os
import pickle
from datetime import datetime
from pprint import pprint
from random import randint

import pandas as pd
from autorag.data.qacreation import generate_qa_llama_index_by_ratio, make_single_content_qa
from llama_index.llms.openai import OpenAI

start = datetime.now()
start_rnd = int(start.strftime("%Y%m%d%H%M%S"))
start_str = start.strftime("%Y%m%d_%H%M%S")

ratio_dict = {
    './prompts/dataa_prompt.txt': 3,
    './prompts/datab_prompt.txt': 2,
    './prompts/datac_prompt.txt': 2,
}

corpus_df = pd.read_parquet('./corpus/corpus.parquet')
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")

with open("./prompts/sys_prom.txt", "r") as fin:
    sys_prom = fin.read()

llm = OpenAI(model='gpt-3.5-turbo-0125',
             temperature=0.5,
             system_prompt=sys_prom,
             api_key=OPENAI_API_KEY,
             max_tokens=1000,
             max_retries=3)

rnd = randint(0, int(start_rnd)) % (2**32 - 1)
qa_df = make_single_content_qa(corpus_df,
                               content_size=50,
                               qa_creation_func=generate_qa_llama_index_by_ratio,
                               llm=llm,
                               prompts_ratio=ratio_dict,
                               question_num_per_content=2,
                               random_state=rnd,
                               batch=6,
                               output_filepath=f'./qa/qa_{start_str}.parquet')
That is how I tried to create the QA data; the goal was 2 (or more) questions per content. But when I set question_num_per_content=2, make_single_content_qa() did not complete and failed with the error message below. In the docs, the example uses 1 question per content, like this:
qa_df = make_single_content_qa(corpus_df,
                               content_size=50,
                               qa_creation_func=generate_qa_llama_index_by_ratio,
                               llm=llm,
                               prompts_ratio=ratio_dict,
                               question_num_per_content=1,
                               batch=6)
Setting it to 1 does work and produces QA data, but I'm not sure a single question per content is really enough... Has anyone else run into this?
Environment: Python 3.11, and the repo is at the commit below (I pulled and ran it today, so it should be HEAD).
https://github.com/Marker-Inc-Korea/AutoRAG
commit dd7c8ba5220d484463eb8e208aabec6c76f58ccb (HEAD -> main, origin/main, origin/HEAD)
Date: Wed May 1 20:36:55 2024 +0900
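For what it's worth, a hypothetical stopgap I considered (not from the docs): run the single-question generation twice with question_num_per_content=1 and different random_state values, then concatenate the two results. The merge step is plain pandas; qa_df_a and qa_df_b below are placeholders standing in for the two make_single_content_qa outputs:

```python
import pandas as pd

# Placeholders for two runs of make_single_content_qa with
# question_num_per_content=1 and different random_state values.
qa_df_a = pd.DataFrame({"qid": ["a1"], "query": ["Q1?"], "retrieval_gt": [[["doc1"]]]})
qa_df_b = pd.DataFrame({"qid": ["b1"], "query": ["Q2?"], "retrieval_gt": [[["doc1"]]]})

# Concatenate, drop exact duplicate questions, and rebuild a clean index.
qa_df = (
    pd.concat([qa_df_a, qa_df_b], ignore_index=True)
    .drop_duplicates(subset="query")
    .reset_index(drop=True)
)
print(len(qa_df))  # 2
```

This only approximates two questions per content (the two runs may occasionally produce near-duplicate questions that survive the exact-match dedup), so I'd still prefer question_num_per_content=2 to work directly.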
<Error message>
import sys; print('Python %s on %s' % (sys.version, sys.platform))
"~\AutoRAG\Scripts\python.exe" -X pycache_prefix=C:\Users\USER\AppData\Local\JetBrains\PyCharmCE2023.2\cpython-cache "C:/Program Files/JetBrains/PyCharm Community Edition 2022.1.4/plugins/python-ce/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 11765 --file "~\AutoRAG\make_qa.py"
Connected to pydev debugger (build 232.10227.11)
~\AutoRAG\Lib\site-packages\swifter\swifter.py:87: UserWarning: This pandas object has duplicate indices, and swifter may not be able to improve performance. Consider resetting the indices with `df.reset_index(drop=True)`.
warnings.warn(
[05/05/24 12:03:19] INFO [_base_client.py:1603] Retrying request to /chat/completions in 0.865943 seconds
    (the same INFO retry line repeats dozens of times between 12:03:19 and 12:04:29, with back-off delays from ~0.75 s to ~3.9 s)
[05/05/24 12:03:37] WARNING [before_sleep.py:65] Retrying llama_index.llms.openai.base.OpenAI._achat in 0.37018096711688264 seconds as it raised APIConnectionError: Connection error..
    (the same WARNING retry line repeats dozens of times over the same period, with delays up to ~12 s)
[05/05/24 12:04:29] INFO [_base_client.py:1603] Retrying request to /chat/completions in 0.837530 seconds
Traceback (most recent call last):
File "~\AutoRAG\Lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
yield
File "~\AutoRAG\Lib\site-packages\httpx\_transports\default.py", line 373, in handle_async_request
resp = await self._pool.handle_async_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpcore\_async\connection_pool.py", line 216, in handle_async_request
raise exc from None
File "~\AutoRAG\Lib\site-packages\httpcore\_async\connection_pool.py", line 196, in handle_async_request
response = await connection.handle_async_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpcore\_async\connection.py", line 99, in handle_async_request
raise exc
File "~\AutoRAG\Lib\site-packages\httpcore\_async\connection.py", line 76, in handle_async_request
stream = await self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpcore\_async\connection.py", line 122, in _connect
stream = await self._network_backend.connect_tcp(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpcore\_backends\anyio.py", line 114, in connect_tcp
with map_exceptions(exc_map):
File "C:\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "~\AutoRAG\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 11001] getaddrinfo failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1514, in _request
response = await self._client.send(
^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpx\_client.py", line 1689, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpx\_client.py", line 1763, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\httpx\_transports\default.py", line 372, in handle_async_request
with map_httpcore_exceptions():
File "C:\Python311\Lib\contextlib.py", line 155, in __exit__
self.gen.throw(typ, value, traceback)
File "~\AutoRAG\Lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 11001] getaddrinfo failed
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~\0042_GITHUB\0004_AutoRAG\AutoRAG\autorag\utils\util.py", line 267, in process_batch
batch_results = await asyncio.gather(*batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\0042_GITHUB\0004_AutoRAG\AutoRAG\autorag\data\qacreation\llama_index.py", line 147, in async_qa_gen_llama_index
return await generate(content, llm)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\0042_GITHUB\0004_AutoRAG\AutoRAG\autorag\data\qacreation\llama_index.py", line 140, in generate
output = await llm.acomplete(
^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\llama_index\core\llms\callbacks.py", line 257, in wrapped_async_llm_predict
f_return_val = await f(_self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\llama_index\llms\openai\base.py", line 600, in acomplete
return await acomplete_fn(prompt, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\llama_index\core\base\llms\generic_utils.py", line 221, in wrapper
chat_response = await func(messages, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\tenacity\_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\tenacity\_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\tenacity\__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\tenacity\__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "~\AutoRAG\Lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\llama_index\llms\openai\base.py", line 620, in _achat
response = await aclient.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\resources\chat\completions.py", line 1161, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1782, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1485, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1538, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1538, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1538, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~\AutoRAG\Lib\site-packages\openai\_base_client.py", line 1548, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
python-BaseException
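One more observation on my own log: the root exception is httpcore.ConnectError: [Errno 11001] getaddrinfo failed, which is a DNS resolution failure on Windows (socket.gaierror), not something raised by question_num_per_content itself. A quick way to see the same error class in isolation (the .invalid TLD is reserved and never resolves):

```python
import socket

def can_resolve(host: str) -> bool:
    # getaddrinfo is exactly what fails with "[Errno 11001] getaddrinfo
    # failed" in the traceback; it raises socket.gaierror on DNS failure.
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

print(can_resolve("no-such-host.invalid"))
```

So my network/DNS may also be flaky during the run, independent of the parameter question, e.g. can_resolve("api.openai.com") should return True on a healthy connection.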