"concurrent.futures" --- 並列タスク実行
***************************************

Added in version 3.2.

**Source code:** Lib/concurrent/futures/thread.py,
Lib/concurrent/futures/process.py and
Lib/concurrent/futures/interpreter.py

======================================================================

"concurrent.futures" モジュールは、非同期に実行できる呼び出し可能オブ
ジェクトの高水準のインターフェースを提供します。

The asynchronous execution can be performed with threads, using
"ThreadPoolExecutor" or "InterpreterPoolExecutor", or separate
processes, using "ProcessPoolExecutor". Each implements the same
interface, which is defined by the abstract "Executor" class.

"concurrent.futures.Future" must not be confused with
"asyncio.Future", which is designed for use with "asyncio" tasks and
coroutines. See asyncio's Future documentation for a detailed
comparison of the two.

Availability: not WASI.

This module does not work or is not available on WebAssembly. See
WebAssembly platforms for more information.


Executor Objects
================

class concurrent.futures.Executor

   An abstract class that provides methods to execute calls
   asynchronously.  It should not be used directly, but through its
   concrete subclasses.

   submit(fn, /, *args, **kwargs)

      Schedules the callable, *fn*, to be executed as "fn(*args,
      **kwargs)" and returns a "Future" object representing the
      execution of the callable.

         with ThreadPoolExecutor(max_workers=1) as executor:
             future = executor.submit(pow, 323, 1235)
             print(future.result())

   map(fn, *iterables, timeout=None, chunksize=1, buffersize=None)

      Similar to "map(fn, *iterables)" except:

      * The *iterables* are collected immediately rather than lazily,
        unless a *buffersize* is specified to limit the number of
        submitted tasks whose results have not yet been yielded. If
        the buffer is full, iteration over the *iterables* pauses
        until a result is yielded from the buffer.

      * *fn* is executed asynchronously and several calls to *fn* may
        be made concurrently.

      もし "__next__()" が呼ばれその結果が元々の "Executor.map()" の呼
      び出しから *timeout* 秒経った後も利用できない場合、返されるイテ
      レータは "TimeoutError" を送出します。*timeout* は整数または浮動
      小数点数です。もし *timeout* が指定されないか の場合、待ち時間に
      制限はありません。

      If a *fn* call raises an exception, then that exception will be
      raised when its value is retrieved from the iterator.

      When using "ProcessPoolExecutor", this method chops *iterables*
      into a number of chunks which it submits to the pool as separate
      tasks.  The (approximate) size of these chunks can be specified
      by setting *chunksize* to a positive integer.  For very long
      iterables, using a large value for *chunksize* can significantly
      improve performance compared to the default size of 1.  With
      "ThreadPoolExecutor" and "InterpreterPoolExecutor", *chunksize*
      has no effect.

      Changed in version 3.5: Added the *chunksize* argument.

      Changed in version 3.14: Added the *buffersize* argument.
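      The ordering guarantee of "map()" can be sketched as follows;
      "square" is a hypothetical helper, and results are yielded in the
      order the inputs were given even though the calls may run
      concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=3) as executor:
    # map() yields results in input order, not completion order.
    results = list(executor.map(square, [1, 2, 3, 4]))

print(results)  # [1, 4, 9, 16]
```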

   shutdown(wait=True, *, cancel_futures=False)

      Signal the executor that it should free any resources that it is
      using when the currently pending futures are done executing.
      Calls to "Executor.submit()" and "Executor.map()" made after
      shutdown will raise "RuntimeError".

      *wait* が "True" の場合、すべての未完了のフューチャの実行が完了
      して Executor に関連付けられたリソースが解放されるまで、このメソ
      ッドは返りません。 *wait* が "False" の場合、このメソッドはすぐ
      に返り、すべての未完了のフューチャの実行が完了したときに、
      Executor に関連付けられたリソースが解放されます。 *wait* の値に
      関係なく、すべての未完了のフューチャの実行が完了するまで Python
      プログラム全体は終了しません。

      If *cancel_futures* is "True", this method will cancel all
      pending futures that the executor has not started running. Any
      futures that are completed or running won't be cancelled,
      regardless of the value of *cancel_futures*.

      If both *cancel_futures* and *wait* are "True", all futures that
      the executor has started running will be completed prior to this
      method returning. The remaining futures are cancelled.

      You can avoid having to call this method explicitly if you use
      the executor as a *context manager* via the  "with" statement,
      which will shutdown the "Executor" (waiting as if
      "Executor.shutdown()" were called with *wait* set to "True"):

         import shutil
         with ThreadPoolExecutor(max_workers=4) as e:
             e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
             e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
             e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
             e.submit(shutil.copy, 'src4.txt', 'dest4.txt')

      Changed in version 3.9: Added *cancel_futures*.
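      The effect of *cancel_futures* can be sketched with a
      single-worker pool; "blocker" occupies the only worker, so
      "queued" is still pending when "shutdown()" is called:

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)
blocker = executor.submit(time.sleep, 0.5)  # grabs the only worker
queued = executor.submit(time.sleep, 0.5)   # waits in the queue

# Make sure the first future has actually started before shutting down.
while not blocker.running():
    time.sleep(0.01)

# The running future completes; the pending one is cancelled.
executor.shutdown(wait=True, cancel_futures=True)
print(blocker.cancelled(), queued.cancelled())  # False True
```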


ThreadPoolExecutor
==================

"ThreadPoolExecutor" はスレッドのプールを使用して非同期に呼び出しを行
う、 "Executor" のサブクラスです。

"Future" に関連づけられた呼び出し可能オブジェクトが、別の "Future" の
結果を待つ時にデッドロックすることがあります。例:

   import time
   def wait_on_b():
       time.sleep(5)
       print(b.result())  # b will never complete because it is waiting on a.
       return 5

   def wait_on_a():
       time.sleep(5)
       print(a.result())  # a will never complete because it is waiting on b.
       return 6


   executor = ThreadPoolExecutor(max_workers=2)
   a = executor.submit(wait_on_b)
   b = executor.submit(wait_on_a)

And:

   def wait_on_future():
       f = executor.submit(pow, 5, 2)
       # This will never complete because there is only one worker thread and
       # it is executing this function.
       print(f.result())

   executor = ThreadPoolExecutor(max_workers=1)
   executor.submit(wait_on_future)

class concurrent.futures.ThreadPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=())

   An "Executor" subclass that uses a pool of at most *max_workers*
   threads to execute calls asynchronously.

   All threads enqueued to "ThreadPoolExecutor" will be joined before
   the interpreter can exit. Note that the exit handler which does
   this is executed *before* any exit handlers added using "atexit".
   This means exceptions in the main thread must be caught and handled
   in order to signal threads to exit gracefully. For this reason, it
   is recommended that "ThreadPoolExecutor" not be used for long-
   running tasks.

   *initializer* is an optional callable that is called at the start
   of each worker thread; *initargs* is a tuple of arguments passed to
   the initializer.  Should *initializer* raise an exception, all
   currently pending jobs will raise a "BrokenThreadPool", as well as
   any attempt to submit more jobs to the pool.

   Changed in version 3.5: If *max_workers* is "None" or not given, it
   will default to the number of processors on the machine, multiplied
   by "5", assuming that "ThreadPoolExecutor" is often used to overlap
   I/O instead of CPU work and the number of workers should be higher
   than the number of workers for "ProcessPoolExecutor".

   Changed in version 3.6: Added the *thread_name_prefix* parameter to
   allow users to control the "threading.Thread" names for worker
   threads created by the pool for easier debugging.

   Changed in version 3.7: Added the *initializer* and *initargs*
   arguments.

   Changed in version 3.8: Default value of *max_workers* is changed to
   "min(32, os.cpu_count() + 4)". This default value preserves at least
   5 workers for I/O bound tasks. It utilizes at most 32 CPU cores for
   CPU bound tasks which release the GIL. And it avoids using very
   large resources implicitly on many-core machines.
   ThreadPoolExecutor now reuses idle worker threads before starting
   *max_workers* worker threads too.

   Changed in version 3.13: Default value of *max_workers* is changed
   to "min(32, (os.process_cpu_count() or 1) + 4)".


ThreadPoolExecutor Example
--------------------------

   import concurrent.futures
   import urllib.request

   URLS = ['http://www.foxnews.com/',
           'http://www.cnn.com/',
           'http://europe.wsj.com/',
           'http://www.bbc.co.uk/',
           'http://nonexistent-subdomain.python.org/']

   # Retrieve a single page and report the URL and contents
   def load_url(url, timeout):
       with urllib.request.urlopen(url, timeout=timeout) as conn:
           return conn.read()

   # We can use a with statement to ensure threads are cleaned up promptly
   with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
       # Start the load operations and mark each future with its URL
       future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
       for future in concurrent.futures.as_completed(future_to_url):
           url = future_to_url[future]
           try:
               data = future.result()
           except Exception as exc:
               print('%r generated an exception: %s' % (url, exc))
           else:
               print('%r page is %d bytes' % (url, len(data)))


InterpreterPoolExecutor
=======================

Added in version 3.14.

The "InterpreterPoolExecutor" class uses a pool of interpreters to
execute calls asynchronously.  It is a "ThreadPoolExecutor" subclass,
which means each worker is running in its own thread. The difference
here is that each worker has its own interpreter, and runs each task
using that interpreter.

The biggest benefit to using interpreters instead of only threads is
true multi-core parallelism.  Each interpreter has its own *Global
Interpreter Lock*, so code running in one interpreter can run on one
CPU core, while code in another interpreter runs unblocked on a
different core.

The tradeoff is that writing concurrent code for use with multiple
interpreters can take extra effort.  However, this is because it
forces you to be deliberate about how and when interpreters interact,
and to be explicit about what data is shared between interpreters.
This results in several benefits that help balance the extra effort,
including true multi-core parallelism.  For example, code written this
way can make it easier to reason about concurrency.  Another major
benefit is that you don't have to deal with several of the big pain
points of using threads, like race conditions.

Each worker's interpreter is isolated from all the other interpreters.
"Isolated" means each interpreter has its own runtime state and
operates completely independently.  For example, if you redirect
"sys.stdout" in one interpreter, it will not be automatically
redirected to any other interpreter.  If you import a module in one
interpreter, it is not automatically imported in any other.  You would
need to import the module separately in each interpreter where you
need it.
In fact, each module imported in an interpreter is a completely
separate object from the same module in a different interpreter,
including "sys", "builtins", and even "__main__".

Isolation means a mutable object, or other data, cannot be used by
more than one interpreter at the same time.  That effectively means
interpreters cannot actually share such objects or data.  Instead,
each interpreter must have its own copy, and you will have to
synchronize any changes between the copies manually.  Immutable
objects and data, like the builtin singletons, strings, and tuples of
immutable objects, don't have these limitations.

Communicating and synchronizing between interpreters is most
effectively done using dedicated tools, like those proposed in **PEP
734**.  One less efficient alternative is to serialize with "pickle"
and then send the bytes over a shared "socket" or "pipe".

class concurrent.futures.InterpreterPoolExecutor(max_workers=None, thread_name_prefix='', initializer=None, initargs=())

   A "ThreadPoolExecutor" subclass that executes calls asynchronously
   using a pool of at most *max_workers* threads.  Each thread runs
   tasks in its own interpreter.  The worker interpreters are isolated
   from each other, which means each has its own runtime state and
   that they can't share any mutable objects or other data.  Each
   interpreter has its own *Global Interpreter Lock*, which means code
   run with this executor has true multi-core parallelism.

   The optional *initializer* and *initargs* arguments have the same
   meaning as for "ThreadPoolExecutor": the initializer is run when
   each worker is created, though in this case it is run in the
   worker's interpreter.  The executor serializes the *initializer*
   and *initargs* using "pickle" when sending them to the worker's
   interpreter.

   Note:

     The executor may replace uncaught exceptions from *initializer*
     with "ExecutionFailed".

   Other caveats from parent "ThreadPoolExecutor" apply here.

"submit()" and "map()" work like normal, except the worker serializes
the callable and arguments using "pickle" when sending them to its
interpreter.  The worker likewise serializes the return value when
sending it back.

When a worker's current task raises an uncaught exception, the worker
always tries to preserve the exception as-is.  If that is successful
then it also sets the "__cause__" to a corresponding "ExecutionFailed"
instance, which contains a summary of the original exception. In the
uncommon case that the worker is not able to preserve the original as-
is then it directly preserves the corresponding "ExecutionFailed"
instance instead.


ProcessPoolExecutor
===================

"ProcessPoolExecutor"  はプロセスプールを使って非同期呼び出しを実施す
る "Executor" のサブクラスです。"ProcessPoolExecutor" は
"multiprocessing" モジュールを利用します。このため *Global Interpreter
Lock* を回避することができますが、pickle 化できるオブジェクトしか実行
したり返したりすることができません。

"__main__" モジュールはワーカサブプロセスでインポート可能でなければな
りません。 すなわち、 "ProcessPoolExecutor" は対話的インタープリタでは
動きません。

"ProcessPoolExecutor" に渡された呼び出し可能オブジェクトから
"Executor" や "Future" メソッドを呼ぶとデッドロックに陥ります。

Note that the restrictions on functions and arguments needing to be
picklable as per "multiprocessing.Process" apply when using "submit()"
and "map()" on a "ProcessPoolExecutor". A function defined in a REPL
or a lambda should not be expected to work.

class concurrent.futures.ProcessPoolExecutor(max_workers=None, mp_context=None, initializer=None, initargs=(), max_tasks_per_child=None)

   An "Executor" subclass that executes calls asynchronously using a
   pool of at most *max_workers* processes.  If *max_workers* is
   "None" or not given, it will default to "os.process_cpu_count()".
   If *max_workers* is less than or equal to "0", then a "ValueError"
   will be raised. On Windows, *max_workers* must be less than or
   equal to "61". If it is not then "ValueError" will be raised. If
   *max_workers* is "None", then the default chosen will be at most
   "61", even if more processors are available. *mp_context* can be a
   "multiprocessing" context or "None". It will be used to launch the
   workers. If *mp_context* is "None" or not given, the default
   "multiprocessing" context is used. See コンテキストと開始方式.

   *initializer* is an optional callable that is called at the start
   of each worker process; *initargs* is a tuple of arguments passed
   to the initializer.  Should *initializer* raise an exception, all
   currently pending jobs will raise a "BrokenProcessPool", as well as
   any attempt to submit more jobs to the pool.

   *max_tasks_per_child* is an optional argument that specifies the
   maximum number of tasks a single process can execute before it will
   exit and be replaced with a fresh worker process. By default
   *max_tasks_per_child* is "None" which means worker processes will
   live as long as the pool. When a max is specified, the "spawn"
   multiprocessing start method will be used by default in absence of
   a *mp_context* parameter. This feature is incompatible with the
   "fork" start method.

   Note:

     Bugs have been reported when using the *max_tasks_per_child*
     feature that can result in the "ProcessPoolExecutor" hanging in
     some circumstances. Follow its eventual resolution in gh-115634.

   Changed in version 3.3: When one of the worker processes terminates
   abruptly, a "BrokenProcessPool" error is now raised.  Previously,
   behaviour was undefined but operations on the executor or its
   futures would often freeze or deadlock.

   Changed in version 3.7: The *mp_context* argument was added to allow
   users to control the start_method for worker processes created by
   the pool.  Added the *initializer* and *initargs* arguments.

   Changed in version 3.11: The *max_tasks_per_child* argument was
   added to allow users to control the lifetime of workers in the
   pool.

   Changed in version 3.12: On POSIX systems, if your application has
   multiple threads and the "multiprocessing" context uses the
   ""fork"" start method: The "os.fork()" function called internally
   to spawn workers may raise a "DeprecationWarning". Pass a
   *mp_context* configured to use a different start method. See the
   "os.fork()" documentation for further explanation.

   Changed in version 3.13: *max_workers* uses "os.process_cpu_count()"
   by default, instead of "os.cpu_count()".

   Changed in version 3.14: The default process start method (see
   Contexts and start methods) changed away from *fork*. If you require the
   *fork* start method for "ProcessPoolExecutor" you must explicitly
   pass "mp_context=multiprocessing.get_context("fork")".

   terminate_workers()

      Attempt to terminate all living worker processes immediately by
      calling "Process.terminate" on each of them. Internally, it will
      also call "Executor.shutdown()" to ensure that all other
      resources associated with the executor are freed.

      After calling this method the caller should no longer submit
      tasks to the executor.

      Added in version 3.14.

   kill_workers()

      Attempt to kill all living worker processes immediately by
      calling "Process.kill" on each of them. Internally, it will also
      call "Executor.shutdown()" to ensure that all other resources
      associated with the executor are freed.

      After calling this method the caller should no longer submit
      tasks to the executor.

      Added in version 3.14.


ProcessPoolExecutor Example
---------------------------

   import concurrent.futures
   import math

   PRIMES = [
       112272535095293,
       112582705942171,
       112272535095293,
       115280095190773,
       115797848077099,
       1099726899285419]

   def is_prime(n):
       if n < 2:
           return False
       if n == 2:
           return True
       if n % 2 == 0:
           return False

       sqrt_n = int(math.floor(math.sqrt(n)))
       for i in range(3, sqrt_n + 1, 2):
           if n % i == 0:
               return False
       return True

   def main():
       with concurrent.futures.ProcessPoolExecutor() as executor:
           for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
               print('%d is prime: %s' % (number, prime))

   if __name__ == '__main__':
       main()


Future Objects
==============

"Future" クラスは呼び出し可能オブジェクトの非同期実行をカプセル化しま
す。 "Future" のインスタンスは "Executor.submit()" によって生成されま
す。

class concurrent.futures.Future

   Encapsulates the asynchronous execution of a callable.  "Future"
   instances are created by "Executor.submit()" and should not be
   created directly except for testing.

   cancel()

      Attempt to cancel the call.  If the call is currently being
      executed or finished running and cannot be cancelled then the
      method will return "False", otherwise the call will be cancelled
      and the method will return "True".

   cancelled()

      Return "True" if the call was successfully cancelled.

   running()

      Return "True" if the call is currently being executed and cannot
      be cancelled.

   done()

      Return "True" if the call was successfully cancelled or finished
      running.

   result(timeout=None)

      Return the value returned by the call. If the call hasn't yet
      completed then this method will wait up to *timeout* seconds.  If
      the call hasn't completed in *timeout* seconds, then a
      "TimeoutError" will be raised.  *timeout* can be an int or float.
      If *timeout* is not specified or "None", there is no limit to the
      wait time.

      If the future is cancelled before completing then
      "CancelledError" will be raised.

      If the call raised an exception, this method will raise the same
      exception.

   exception(timeout=None)

      Return the exception raised by the call.  If the call hasn't yet
      completed then this method will wait up to *timeout* seconds.  If
      the call hasn't completed in *timeout* seconds, then a
      "TimeoutError" will be raised.  *timeout* can be an int or float.
      If *timeout* is not specified or "None", there is no limit to the
      wait time.

      If the future is cancelled before completing then
      "CancelledError" will be raised.

      If the call completed without raising, "None" is returned.

   add_done_callback(fn)

      Attaches the callable *fn* to the future.  *fn* will be called,
      with the future as its only argument, when the future is
      cancelled or finishes running.

      Added callables are called in the order that they were added and
      are always called in a thread belonging to the process that added
      them.  If the callable raises an "Exception" subclass, it will be
      logged and ignored.  If the callable raises a "BaseException"
      subclass, the behavior is undefined.

      If the future has already completed or been cancelled, *fn* will
      be called immediately.
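      A minimal sketch of "add_done_callback()"; because exiting the
      "with" block waits for the future to finish, the callback has run
      by the time the final "print()" executes:

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def on_done(future):
    # Called with the finished future as its only argument.
    results.append(future.result())

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(pow, 2, 10)
    future.add_done_callback(on_done)

print(results)  # [1024]
```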

   以下の "Future" メソッドは、ユニットテストでの使用と "Executor" を
   実装することを意図しています。

   set_running_or_notify_cancel()

      This method should only be called by "Executor" implementations
      before executing the work associated with the "Future" and by
      unit tests.

      If the method returns "False" then the "Future" was cancelled,
      i.e. "Future.cancel()" was called and returned "True".  Any
      threads waiting on the "Future" completing (i.e. through
      "as_completed()" or "wait()") will be woken up.

      If the method returns "True" then the "Future" was not cancelled
      and has been put in the running state, i.e. calls to
      "Future.running()" will return "True".

      This method can only be called once and cannot be called after
      "Future.set_result()" or "Future.set_exception()" have been
      called.

   set_result(result)

      "Future" に関連付けられたワークの結果を *result* に設定します。

      このメソッドは、 "Executor" の実装またはユニットテストによっての
      み使用してください。

      Changed in version 3.8: This method raises
      "concurrent.futures.InvalidStateError" if the "Future" is
      already done.

   set_exception(exception)

      "Future" に関連付けられたワークの結果を "Exception" *exception*
      に設定します。

      このメソッドは、 "Executor" の実装またはユニットテストによっての
      み使用してください。

      Changed in version 3.8: This method raises
      "concurrent.futures.InvalidStateError" if the "Future" is
      already done.
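   The unit-test pattern described above can be sketched by driving a
   "Future" by hand, without any executor:

```python
from concurrent.futures import Future, InvalidStateError

future = Future()
# An Executor implementation would call this before running the work.
assert future.set_running_or_notify_cancel()  # not cancelled -> running
future.set_result(42)

print(future.done(), future.result())  # True 42

try:
    future.set_result(0)  # a finished Future rejects a second result
except InvalidStateError:
    print('already done')
```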


Module Functions
================

concurrent.futures.wait(fs, timeout=None, return_when=ALL_COMPLETED)

   Wait for the "Future" instances (possibly created by different
   "Executor" instances) given by *fs* to complete. Duplicate futures
   given to *fs* are removed and will be returned only once. Returns a
   named 2-tuple of sets.  The first set, named "done", contains the
   futures that completed (finished or cancelled futures) before the
   wait completed.  The second set, named "not_done", contains the
   futures that did not complete (pending or running futures).

   *timeout* can be used to control the maximum number of seconds to
   wait before returning.  *timeout* can be an int or float.  If
   *timeout* is not specified or "None", there is no limit to the wait
   time.

   *return_when* indicates when this function should return.  It must
   be one of the following constants:

   +----------------------------------------------------+----------------------------------------------------+
   | Constant                                           | Description                                        |
   |====================================================|====================================================|
   | concurrent.futures.FIRST_COMPLETED                 | The function will return when any future finishes  |
   |                                                    | or is cancelled.                                   |
   +----------------------------------------------------+----------------------------------------------------+
   | concurrent.futures.FIRST_EXCEPTION                 | The function will return when any future finishes  |
   |                                                    | by raising an exception. If no future raises an    |
   |                                                    | exception then it is equivalent to                 |
   |                                                    | "ALL_COMPLETED".                                   |
   +----------------------------------------------------+----------------------------------------------------+
   | concurrent.futures.ALL_COMPLETED                   | The function will return when all futures finish   |
   |                                                    | or are cancelled.                                  |
   +----------------------------------------------------+----------------------------------------------------+

concurrent.futures.as_completed(fs, timeout=None)

   Returns an iterator over the "Future" instances (possibly created
   by different "Executor" instances) given by *fs* that yields
   futures as they complete (finished or cancelled futures). Any
   futures given by *fs* that are duplicated will be returned once.
   Any futures that completed before "as_completed()" is called will
   be yielded first.  The returned iterator raises a "TimeoutError" if
   "__next__()" is called and the result isn't available after
   *timeout* seconds from the original call to "as_completed()".
   *timeout* can be an int or float. If *timeout* is not specified or
   "None", there is no limit to the wait time.

See also:

  **PEP 3148** -- futures - execute computations asynchronously
     The proposal which described this feature for inclusion in the
     Python standard library.


Exception classes
=================

exception concurrent.futures.CancelledError

   Raised when a future is cancelled.

exception concurrent.futures.TimeoutError

   A deprecated alias of "TimeoutError", raised when a future
   operation exceeds the given timeout.

   Changed in version 3.11: This class was made an alias of
   "TimeoutError".

exception concurrent.futures.BrokenExecutor

   Derived from "RuntimeError", this exception class is raised when an
   executor is broken for some reason, and cannot be used to submit or
   execute new tasks.

   Added in version 3.7.

exception concurrent.futures.InvalidStateError

   Raised when an operation is performed on a future that is not
   allowed in the current state.

   Added in version 3.8.

exception concurrent.futures.thread.BrokenThreadPool

   Derived from "BrokenExecutor", this exception class is raised when
   one of the workers of a "ThreadPoolExecutor" has failed
   initializing.

   Added in version 3.7.
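   This failure mode can be sketched with an initializer that always
   raises; pending futures then fail with "BrokenThreadPool" (a
   traceback from the failing initializer may also be logged):

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures.thread import BrokenThreadPool

def bad_init():
    raise RuntimeError('worker initialization failed')

executor = ThreadPoolExecutor(max_workers=1, initializer=bad_init)
future = executor.submit(pow, 2, 3)
try:
    # The worker's initializer fails, so the pending job fails too.
    future.result()
except BrokenThreadPool:
    print('pool is broken')
finally:
    executor.shutdown()
```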

exception concurrent.futures.interpreter.BrokenInterpreterPool

   Derived from "BrokenThreadPool", this exception class is raised
   when one of the workers of a "InterpreterPoolExecutor" has failed
   initializing.

   Added in version 3.14.

exception concurrent.futures.process.BrokenProcessPool

   Derived from "BrokenExecutor" (formerly "RuntimeError"), this
   exception class is raised when one of the workers of a
   "ProcessPoolExecutor" has terminated in a non-clean fashion (for
   example, if it was killed from the outside).

   Added in version 3.3.
