How to deal with timeouts when retrieving data from MP molecules

Hi, I tried to retrieve data from MP molecules with the following code:

from mp_api.client import MPRester
with MPRester("api_key", monty_decode=False, use_document_model=False) as mpr:
    docs = mpr.molecules.summary.search(fields=["redox", "molecules", "molecule_id"], charge=0, all_fields=False)

and I got the following exceptions:

File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 534, in _make_request
response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connection.py", line 516, in getresponse
httplib_response = super().getresponse()
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\http\client.py", line 1430, in getresponse
response.begin()
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\http\client.py", line 331, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\http\client.py", line 292, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\socket.py", line 720, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\ssl.py", line 1251, in recv_into
return self.read(nbytes, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\ssl.py", line 1103, in read
return self._sslobj.read(len, buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TimeoutError: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 367, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='api.materialsproject.org', port=443): Read timed out. (read timeout=20)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\requests\adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 871, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 871, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 871, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\connectionpool.py", line 841, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\urllib3\util\retry.py", line 519, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.materialsproject.org', port=443): Max retries exceeded with url: /molecules/summary/?_limit=1000&_fields=redox%2Cmolecules%2Cmolecule_id&_skip=331000 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.materialsproject.org', port=443): Read timed out. (read timeout=20)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 508, in _query_resource
return self._submit_requests(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 875, in _submit_requests
data_tuples = self._multi_thread(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 939, in _multi_thread
data, subtotal = future.result()
^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\concurrent\futures\thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 990, in _submit_request_and_process
response = self.session.get(
^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\requests\sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\requests\adapters.py", line 700, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.materialsproject.org', port=443): Max retries exceeded with url: /molecules/summary/?_limit=1000&_fields=redox%2Cmolecules%2Cmolecule_id&_skip=331000 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.materialsproject.org', port=443): Read timed out. (read timeout=20)"))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:\pythonscripts\mpmoldownload.py", line 14, in <module>
docs=mpr.molecules.summary.search(fields=['redox','molecules','molecule_id'],charge=0,all_fields=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\routes\molecules\summary.py", line 131, in search
return super()._search(
^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 1191, in _search
return self._get_all_documents(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 1264, in _get_all_documents
results = self._query_resource(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\jinlujie\.conda\envs\mpapi\Lib\site-packages\mp_api\client\core\client.py", line 582, in _query_resource
raise MPRestError(str(ex))
mp_api.client.core.client.MPRestError: HTTPSConnectionPool(host='api.materialsproject.org', port=443): Max retries exceeded with url: /molecules/summary/?_limit=1000&_fields=redox%2Cmolecules%2Cmolecule_id&_skip=331000 (Caused by ReadTimeoutError("HTTPSConnectionPool(host='api.materialsproject.org', port=443): Read timed out. (read timeout=20)"))

I have tried the code under different network conditions, but it still fails. How can I deal with this?

@ganfisher Thanks for reaching out. The molecules dataset is pretty recent and unfortunately not yet as optimized for filtered retrieval as our other data collections. At over 0.5M documents, it’s also pretty large. Our recommendation would be to retrieve all the data via mpr.molecules.summary.search() (same mpr, no search arguments) and post-filter on your end. If you want files to work with, you can also retrieve the documents from our AWS OpenData repository directly using the AWS CLI. See the docs here and the data here. Make sure to use the latest version of mp-api if you’re going the mpr route. HTH.
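For reference, here is a minimal sketch of the "retrieve everything, then post-filter locally" approach, assuming use_document_model=False so each document comes back as a plain dict; the field names used below ("charge", "molecule_id", "molecules", "redox") are illustrative and should be checked against the documents you actually receive:

from mp_api.client import MPRester

with MPRester("api_key", monty_decode=False, use_document_model=False) as mpr:
    # No search arguments: the client retrieves the full molecules summary collection.
    all_docs = mpr.molecules.summary.search()

# Post-filter on the client side: keep only charge-neutral molecules
# and only the fields of interest.
neutral_docs = [doc for doc in all_docs if doc.get("charge") == 0]
subset = [
    {key: doc.get(key) for key in ("molecule_id", "molecules", "redox")}
    for doc in neutral_docs
]
print(f"Kept {len(subset)} of {len(all_docs)} molecule documents")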

PS: You’ve actually made us aware that we forgot to copy the molecules data along with our most recent releases on OpenData. That was causing the timeouts as the mpr client was going to our database instead of rerouting to OpenData downloads when you didn’t provide any arguments to search(). Thanks!
