"multiprocessing" --- Paralelismo baseado em processo
*****************************************************

**Source code:** Lib/multiprocessing/

======================================================================


Introduction
============

"multiprocessing" é um pacote que suporta invocação de processos
utilizando uma API semelhante ao módulo "threading". O pacote
"multiprocessing" oferece simultaneamente concorrência local e remota,
efetivamente contornando o *Bloqueio Global do Interpretador*, ao
utilizar subprocessos ao invés de threads. Devido a isso, o módulo
"multiprocessing" permite ao programador aproveitar totalmente os
múltiplos processadores de uma máquina. Ele funciona tanto em Unix
como em Windows.

O módulo "multiprocessing" também introduz APIs que não têm análogos
no módulo "threading". Um exemplo principal disso é o objeto "Pool"
que oferece um meio conveniente de paralelizar a execução de uma
função em vários valores de entrada, distribuindo os dados de entrada
entre processos (paralelismo de dados). O exemplo a seguir demonstra a
prática comum de definir tais funções em um módulo para que os
processos filhos possam importar esse módulo com sucesso. Este exemplo
básico de paralelismo de dados usando "Pool",

   from multiprocessing import Pool

   def f(x):
       return x*x

   if __name__ == '__main__':
       with Pool(5) as p:
           print(p.map(f, [1, 2, 3]))

will print to standard output

   [1, 4, 9]


A classe "Process"
------------------

Em "multiprocessing", os processos são gerados criando um objeto
"Process" e então chamando seu método "start()". "Process" segue a API
de "threading.Thread". Um exemplo trivial de um programa multiprocesso
é

   from multiprocessing import Process

   def f(name):
       print('hello', name)

   if __name__ == '__main__':
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()

To show the individual process IDs involved, here is an expanded
example:

   from multiprocessing import Process
   import os

   def info(title):
       print(title)
       print('module name:', __name__)
       print('parent process:', os.getppid())
       print('process id:', os.getpid())

   def f(name):
       info('function f')
       print('hello', name)

   if __name__ == '__main__':
       info('main line')
       p = Process(target=f, args=('bob',))
       p.start()
       p.join()

For an explanation of why the "if __name__ == '__main__'" part is
necessary, see Programming guidelines.


Contexts and start methods
--------------------------

Depending on the platform, "multiprocessing" supports three ways to
start a process.  These *start methods* are

   *spawn*
      The parent process starts a fresh python interpreter process.
      The child process will only inherit those resources necessary to
      run the process object's "run()" method.  In particular,
      unnecessary file descriptors and handles from the parent process
      will not be inherited.  Starting a process using this method is
      rather slow compared to using *fork* or *forkserver*.

      Available on Unix and Windows.  The default on Windows and
      macOS.

   *fork*
      The parent process uses "os.fork()" to fork the Python
      interpreter.  The child process, when it begins, is effectively
      identical to the parent process.  All resources of the parent
      are inherited by the child process.  Note that safely forking a
      multithreaded process is problematic.

      Available on Unix only.  The default on Unix.

   *forkserver*
      When the program starts and selects the *forkserver* start
      method, a server process is started.  From then on, whenever a
      new process is needed, the parent process connects to the
      server and requests that it fork a new process.  The fork
      server process is single threaded so it is safe for it to use
      "os.fork()".  No unnecessary resources are inherited.

      Available on Unix platforms which support passing file
      descriptors over Unix pipes.

Changed in version 3.8: On macOS, the *spawn* start method is now the
default.  The *fork* start method should be considered unsafe as it
can lead to crashes of the subprocess. See bpo-33725.

Changed in version 3.4: *spawn* added on all unix platforms, and
*forkserver* added for some unix platforms. Child processes no longer
inherit all of the parent's inheritable handles on Windows.

On Unix using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or "SharedMemory" objects)
created by processes of the program.  When all processes have exited
the resource tracker unlinks any remaining tracked object. Usually
there should be none, but if a process was killed by a signal there
may be some "leaked" resources.  (Neither leaked semaphores nor shared
memory segments will be automatically unlinked until the next reboot.
This is problematic for both objects because the system allows only a
limited number of named semaphores, and shared memory segments occupy
some space in the main memory.)

To select a start method you use the "set_start_method()" in the "if
__name__ == '__main__'" clause of the main module.  For example:

   import multiprocessing as mp

   def foo(q):
       q.put('hello')

   if __name__ == '__main__':
       mp.set_start_method('spawn')
       q = mp.Queue()
       p = mp.Process(target=foo, args=(q,))
       p.start()
       print(q.get())
       p.join()

"set_start_method()" não deve ser usada mais de uma vez no programa.

Alternatively, you can use "get_context()" to obtain a context
object.  Context objects have the same API as the multiprocessing
module, and allow one to use multiple start methods in the same
program.

   import multiprocessing as mp

   def foo(q):
       q.put('hello')

   if __name__ == '__main__':
       ctx = mp.get_context('spawn')
       q = ctx.Queue()
       p = ctx.Process(target=foo, args=(q,))
       p.start()
       print(q.get())
       p.join()

Note that objects related to one context may not be compatible with
processes for a different context.  In particular, locks created
using the *fork* context cannot be passed to processes started using
the *spawn* or *forkserver* start methods.

A library which wants to use a particular start method should
probably use "get_context()" to avoid interfering with the choice of
the library user.
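
For instance, a minimal sketch (the helper name is hypothetical) of a
library function that uses its own context rather than calling
"set_start_method()":

   import multiprocessing as mp

   def run_in_spawned_process(target, *args):
       # Use a private 'spawn' context instead of changing the
       # global default, leaving the user's start-method choice
       # untouched.
       ctx = mp.get_context('spawn')
       p = ctx.Process(target=target, args=args)
       p.start()
       p.join()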

Warning:

  The "'spawn'" and "'forkserver'" start methods cannot currently be
  used with "frozen" executables (i.e., binaries produced by packages
  like **PyInstaller** and **cx_Freeze**) on Unix. The "'fork'" start
  method does work.


Exchanging objects between processes
------------------------------------

"multiprocessing" tem suporte a dois tipos de canal de comunicação
entre processos:

**Queues**

   A classe "Queue" é quase um clone de "queue.Queue". Por exemplo:

      from multiprocessing import Process, Queue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = Queue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "[42, None, 'hello']"
          p.join()

   Queues are thread and process safe.

**Pipes**

   A função "Pipe()" retorna um par de objetos de conexão conectados
   por um encadeamento que por padrão é duplex (bidirecional). Por
   exemplo:

      from multiprocessing import Process, Pipe

      def f(conn):
          conn.send([42, None, 'hello'])
          conn.close()

      if __name__ == '__main__':
          parent_conn, child_conn = Pipe()
          p = Process(target=f, args=(child_conn,))
          p.start()
          print(parent_conn.recv())   # prints "[42, None, 'hello']"
          p.join()

   The two connection objects returned by "Pipe()" represent the two
   ends of the pipe.  Each connection object has "send()" and
   "recv()" methods (among others).  Note that data in a pipe may
   become corrupted if two processes (or threads) try to read from or
   write to the *same* end of the pipe at the same time.  Of course
   there is no risk of corruption from processes using different ends
   of the pipe at the same time.


Synchronization between processes
---------------------------------

"multiprocessing" contém equivalentes de todas as primitivas de
sincronização de "threading". Por exemplo, pode-se usar uma trava para
garantir que apenas um processo exiba na saída padrão por vez:

   from multiprocessing import Process, Lock

   def f(l, i):
       l.acquire()
       try:
           print('hello world', i)
       finally:
           l.release()

   if __name__ == '__main__':
       lock = Lock()

       for num in range(10):
           Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable
to get all mixed up.


Sharing state between processes
-------------------------------

As mentioned above, when doing concurrent programming it is usually
best to avoid using shared state as far as possible.  This is
particularly true when using multiple processes.

However, if you really do need to use some shared data then
"multiprocessing" provides a couple of ways of doing so.

**Shared memory**

   Data can be stored in a shared memory map using "Value" or
   "Array".  For example, the following code

      from multiprocessing import Process, Value, Array

      def f(n, a):
          n.value = 3.1415927
          for i in range(len(a)):
              a[i] = -a[i]

      if __name__ == '__main__':
          num = Value('d', 0.0)
          arr = Array('i', range(10))

          p = Process(target=f, args=(num, arr))
          p.start()
          p.join()

          print(num.value)
          print(arr[:])

   will print

      3.1415927
      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

   Os argumentos "'d'" e "'i'" usados ​​ao criar "num" e "arr" são
   typecodes do tipo usado pelo módulo "array": "'d'" indica um ponto
   flutuante de precisão dupla e "'i'" indica um inteiro com sinal.
   Esses objetos compartilhados serão seguros para thread e processo.

   Para mais flexibilidade no uso de memória compartilhada, pode-se
   utilizar o módulo "multiprocessing.sharedctypes", que suporta a
   criação de objetos ctypes arbritários alocados da memória
   compartilhada.

**Server process**

   A manager object returned by "Manager()" controls a server process
   which holds Python objects and allows other processes to
   manipulate them using proxies.

   A manager returned by "Manager()" will support types "list",
   "dict", "Namespace", "Lock", "RLock", "Semaphore",
   "BoundedSemaphore", "Condition", "Event", "Barrier", "Queue",
   "Value" and "Array".  For example,

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          d[0.25] = None
          l.reverse()

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list(range(10))

              p = Process(target=f, args=(d, l))
              p.start()
              p.join()

              print(d)
              print(l)

   will print

      {0.25: None, 1: '1', '2': 2}
      [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

   Server process managers are more flexible than using shared memory
   objects because they can be made to support arbitrary object
   types.  Also, a single manager can be shared by processes on
   different computers over a network.  They are, however, slower
   than using shared memory.


Using a pool of workers
-----------------------

A classe "Pool" representa um pool de processos de worker. Ela tem
métodos que permitem que tarefas sejam descarregadas para os processos
de worker de algumas maneiras diferentes.

Por exemplo:

   from multiprocessing import Pool, TimeoutError
   import time
   import os

   def f(x):
       return x*x

   if __name__ == '__main__':
       # start 4 worker processes
       with Pool(processes=4) as pool:

           # print "[0, 1, 4,..., 81]"
           print(pool.map(f, range(10)))

           # print same numbers in arbitrary order
           for i in pool.imap_unordered(f, range(10)):
               print(i)

           # evaluate "f(20)" asynchronously
           res = pool.apply_async(f, (20,))      # runs in *only* one process
           print(res.get(timeout=1))             # prints "400"

           # evaluate "os.getpid()" asynchronously
           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
           print(res.get(timeout=1))             # prints the PID of that process

           # launching multiple evaluations asynchronously *may* use more processes
           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
           print([res.get(timeout=1) for res in multiple_results])

           # make a single worker sleep for 10 secs
           res = pool.apply_async(time.sleep, (10,))
           try:
               print(res.get(timeout=1))
           except TimeoutError:
               print("We lacked patience and got a multiprocessing.TimeoutError")

           print("For the moment, the pool remains available for more work")

       # exiting the 'with'-block has stopped the pool
       print("Now the pool is closed and no longer available")

Note that the methods of a pool should only ever be used by the
process which created it.

Note:

  Functionality within this package requires that the "__main__"
  module be importable by the children.  This is covered in
  Programming guidelines but it is worth pointing out here.  This
  means that some examples, such as the "multiprocessing.pool.Pool"
  examples will not work in the interactive interpreter.  For
  example:

     >>> from multiprocessing import Pool
     >>> p = Pool(5)
     >>> def f(x):
     ...     return x*x
     ...
     >>> with p:
     ...   p.map(f, [1,2,3])
     Process PoolWorker-1:
     Process PoolWorker-2:
     Process PoolWorker-3:
     Traceback (most recent call last):
     Traceback (most recent call last):
     Traceback (most recent call last):
     AttributeError: 'module' object has no attribute 'f'
     AttributeError: 'module' object has no attribute 'f'
     AttributeError: 'module' object has no attribute 'f'

  (If you try this it will actually output three full tracebacks
  interleaved in a semi-random fashion, and then you may have to stop
  the parent process somehow.)


Reference
=========

O pacote "multiprocessing" replica principalmente a API do módulo
"threading".


"Process" e exceções
--------------------

class multiprocessing.Process(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)

   Process objects represent activity that is run in a separate
   process.  The "Process" class has equivalents of all the methods
   of "threading.Thread".

   The constructor should always be called with keyword arguments.
   *group* should always be "None"; it exists solely for compatibility
   with "threading.Thread".  *target* is the callable object to be
   invoked by the "run()" method.  It defaults to "None", meaning
   nothing is called. *name* is the process name (see "name" for more
   details). *args* is the argument tuple for the target invocation.
   *kwargs* is a dictionary of keyword arguments for the target
   invocation.  If provided, the keyword-only *daemon* argument sets
   the process "daemon" flag to "True" or "False".  If "None" (the
   default), this flag will be inherited from the creating process.

   By default, no arguments are passed to *target*.

   If a subclass overrides the constructor, it must make sure it
   invokes the base class constructor ("Process.__init__()") before
   doing anything else to the process.

   Changed in version 3.3: Added the *daemon* argument.

   run()

      Method representing the process's activity.

      You may override this method in a subclass.  The standard
      "run()" method invokes the callable object passed to the
      object's constructor as the target argument, if any, with
      sequential and keyword arguments taken from the *args* and
      *kwargs* arguments, respectively.

   start()

      Start the process's activity.

      This must be called at most once per process object.  It
      arranges for the object's "run()" method to be invoked in a
      separate process.

   join([timeout])

      If the optional argument *timeout* is "None" (the default), the
      method blocks until the process whose "join()" method is called
      terminates.  If *timeout* is a positive number, it blocks at
      most *timeout* seconds.  Note that the method returns "None" if
      its process terminates or if the method times out.  Check the
      process's "exitcode" to determine if it terminated.

      A process can be joined many times.

      A process cannot join itself because this would cause a
      deadlock.  It is an error to attempt to join a process before
      it has been started.

   name

      The process's name.  The name is a string used for
      identification purposes only.  It has no semantics.  Multiple
      processes may be given the same name.

      The initial name is set by the constructor.  If no explicit
      name is provided to the constructor, a name of the form
      'Process-N_1:N_2:...:N_k' is constructed, where each N_k is the
      N-th child of its parent.

   is_alive()

      Return whether the process is alive.

      Roughly, a process object is alive from the moment the
      "start()" method returns until the child process terminates.

   daemon

      The process's daemon flag, a Boolean value.  This must be set
      before "start()" is called.

      The initial value is inherited from the creating process.

      When a process exits, it attempts to terminate all of its
      daemonic child processes.

      Note that a daemonic process is not allowed to create child
      processes.  Otherwise a daemonic process would leave its
      children orphaned if it gets terminated when its parent process
      exits.  Additionally, these are **not** Unix daemons or
      services, they are normal processes that will be terminated
      (and not joined) if non-daemonic processes have exited.

   Além da API "threading.Thread", os objetos "Process" também
   oferecem suporte aos seguintes atributos e métodos:

   pid

      Return the process ID.  Before the process is spawned, this
      will be "None".

   exitcode

      The child's exit code.  This will be "None" if the process has
      not yet terminated.  A negative value *-N* indicates that the
      child was terminated by signal *N*.

   authkey

      The process's authentication key (a byte string).

      When "multiprocessing" is initialized the main process is
      assigned a random string using "os.urandom()".

      When a "Process" object is created, it will inherit the
      authentication key of its parent process, although this may be
      changed by setting "authkey" to another byte string.

      See Authentication keys.

   sentinel

      A numeric handle of a system object which will become "ready"
      when the process ends.

      You can use this value if you want to wait on several events at
      once using "multiprocessing.connection.wait()".  Otherwise
      calling "join()" is simpler.

      On Windows, this is an OS handle usable with the
      "WaitForSingleObject" and "WaitForMultipleObjects" family of API
      calls.  On Unix, this is a file descriptor usable with
      primitives from the "select" module.

      New in version 3.3.
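
      For example, a minimal sketch that waits for the first of
      several children to finish using their sentinels:

         from multiprocessing import Process, connection
         import time

         if __name__ == '__main__':
             procs = [Process(target=time.sleep, args=(i,))
                      for i in (1, 2, 3)]
             for p in procs:
                 p.start()
             # Block until at least one child's sentinel is ready.
             connection.wait([p.sentinel for p in procs])
             for p in procs:
                 p.join()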

   terminate()

      Terminate the process.  On Unix this is done using the "SIGTERM"
      signal; on Windows "TerminateProcess()" is used.  Note that exit
      handlers and finally clauses, etc., will not be executed.

      Note that descendant processes of the process will *not* be
      terminated -- they will simply become orphaned.

      Warning:

        If this method is used when the associated process is using a
        pipe or queue then the pipe or queue is liable to become
        corrupted and may become unusable by other process.
        Similarly, if the process has acquired a lock or semaphore
        etc. then terminating it is liable to cause other processes
        to deadlock.

   kill()

      Same as "terminate()" but using the "SIGKILL" signal on Unix.

      New in version 3.7.

   close()

      Fecha o objeto "Process", liberando todos os recursos associados
      a ele. "ValueError" é levantado se o processo subjacente ainda
      estiver em execução. Uma vez que "close()" retorne com sucesso,
      a maioria dos outros métodos e atributos do objeto "Process"
      levantará "ValueError".

      Novo na versão 3.7.

   Observe que os métodos "start()", "join()", "is_alive()",
   "terminate()" e "exitcode" devem ser chamados somente pelo processo
   que criou o objeto processo.

   Exemplo de uso de alguns dos métodos de "Process":

       >>> import multiprocessing, time, signal
       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
       >>> print(p, p.is_alive())
       <Process ... initial> False
       >>> p.start()
       >>> print(p, p.is_alive())
       <Process ... started> True
       >>> p.terminate()
       >>> time.sleep(0.1)
       >>> print(p, p.is_alive())
       <Process ... stopped exitcode=-SIGTERM> False
       >>> p.exitcode == -signal.SIGTERM
       True

exception multiprocessing.ProcessError

   The base class of all "multiprocessing" exceptions.

exception multiprocessing.BufferTooShort

   Exception raised by "Connection.recv_bytes_into()" when the
   supplied buffer object is too small for the message read.

   Se "e" for uma instância de "BufferTooShort", então "e.args[0]"
   retornará a mensagem como uma string de bytes.

exception multiprocessing.AuthenticationError

   Raised when there is an authentication error.

exception multiprocessing.TimeoutError

   Raised by methods with a timeout when the timeout expires.


Pipes and Queues
----------------

When using multiple processes, one generally uses message passing for
communication between processes and avoids having to use any
synchronization primitives like locks.

For passing messages one can use "Pipe()" (for a connection between
two processes) or a queue (which allows multiple producers and
consumers).

Os tipos "Queue", "SimpleQueue" e "JoinableQueue" são filas FIFO
(first-in, first-out) multiprodutoras e multiconsumidoras modeladas na
classe "queue.Queue" da biblioteca padrão. Elas diferem porque "Queue"
não tem os métodos "task_done()" e "join()" introduzidos na classe
"queue.Queue" do Python 2.5.

Se você usar "JoinableQueue", então você **deve** chamar
"JoinableQueue.task_done()" para cada tarefa removida da fila, caso
contrário, o semáforo usado para contar o número de tarefas não
concluídas pode eventualmente transbordar, levantando uma exceção.
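
For example, a minimal consumer sketch that acknowledges each task it
removes from a "JoinableQueue":

   from multiprocessing import JoinableQueue, Process

   def consumer(q):
       while True:
           item = q.get()
           try:
               print('processed', item)
           finally:
               q.task_done()   # acknowledge each task removed

   if __name__ == '__main__':
       q = JoinableQueue()
       Process(target=consumer, args=(q,), daemon=True).start()
       for i in range(3):
           q.put(i)
       q.join()   # returns once every item has been acknowledged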

Note that one can also create a shared queue by using a manager
object -- see Managers.

Note:

  "multiprocessing" usa as exceções usuais "queue.Empty" e
  "queue.Full" para sinalizar um tempo limite. Elas não estão
  disponíveis no espaço de nomes do "multiprocessing", então você
  precisa importá-las de "queue".
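
  For example, a minimal sketch of handling a timeout:

     from multiprocessing import Queue
     from queue import Empty   # not importable from multiprocessing

     q = Queue()
     try:
         q.get(timeout=0.1)    # nothing was put, so this times out
     except Empty:
         print('queue was empty')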

Note:

  When an object is put on a queue, the object is pickled and a
  background thread later flushes the pickled data to an underlying
  pipe.  This has some consequences which are a little surprising,
  but should not cause any practical difficulties -- if they really
  bother you then you can instead use a queue created with a manager.

  1. After putting an object on an empty queue there may be an
     infinitesimal delay before the queue's "empty()" method returns
     "False" and "get_nowait()" can return without raising
     "queue.Empty".

  2. If multiple processes are enqueuing objects, it is possible for
     the objects to be received at the other end out-of-order.
     However, objects enqueued by the same process will always be in
     the expected order with respect to each other.

Warning:

  If a process is killed using "Process.terminate()" or "os.kill()"
  while it is trying to use a "Queue", then the data in the queue is
  likely to become corrupted.  This may cause any other process to
  get an exception when it tries to use the queue later on.

Warning:

  As mentioned above, if a child process has put items on a queue
  (and it has not used "JoinableQueue.cancel_join_thread"), then that
  process will not terminate until all buffered items have been
  flushed to the pipe.  This means that if you try joining that
  process you may get a deadlock unless you are sure that all items
  which have been put on the queue have been consumed.  Similarly, if
  the child process is non-daemonic then the parent process may hang
  on exit when it tries to join all its non-daemonic children.  Note
  that a queue created using a manager does not have this issue.  See
  Programming guidelines.

For an example of the usage of queues for interprocess communication
see Examples.

multiprocessing.Pipe([duplex])

   Retorna um par "(conn1, conn2)" de objetos "Connection"
   representando as extremidades de um encadeamento.

   Se *duplex* for "True" (o padrão), então o encadeamento é
   bidirecional. Se *duplex* for "False", então o encadeamento é
   unidirecional: "conn1" pode ser usado somente para receber
   mensagens e "conn2" pode ser usado somente para enviar mensagens.

class multiprocessing.Queue([maxsize])

   Returns a process shared queue implemented using a pipe and a few
   locks/semaphores.  When a process first puts an item on the queue
   a feeder thread is started which transfers objects from a buffer
   into the pipe.

   The usual "queue.Empty" and "queue.Full" exceptions from the
   standard library's "queue" module are raised to signal timeouts.

   "Queue" implements all the methods of "queue.Queue" except for
   "task_done()" and "join()".

   qsize()

      Return the approximate size of the queue.  Because of
      multithreading/multiprocessing semantics, this number is not
      reliable.

      Note that this may raise "NotImplementedError" on Unix platforms
      like Mac OS X where "sem_getvalue()" is not implemented.

   empty()

      Retorna "True" se a fila estiver vazia, "False" caso contrário.
      Devido à semântica de multithreading/multiprocessamento, isso
      não é confiável.

   full()

      Retorna "True" se a fila estiver cheia, "False" caso contrário.
      Devido à semântica de multithreading/multiprocessamento, isso
      não é confiável.

   put(obj[, block[, timeout]])

      Put *obj* into the queue.  If the optional argument *block* is
      "True" (the default) and *timeout* is "None" (the default),
      block if necessary until a free slot is available.  If
      *timeout* is a positive number, it blocks at most *timeout*
      seconds and raises the "queue.Full" exception if no free slot
      was available within that time.  Otherwise (*block* is
      "False"), put an item on the queue if a free slot is
      immediately available, else raise the "queue.Full" exception
      (*timeout* is ignored in that case).

      Changed in version 3.8: If the queue is closed, "ValueError" is
      raised instead of "AssertionError".

   put_nowait(obj)

      Equivalent to "put(obj, False)".

   get([block[, timeout]])

      Remove and return an item from the queue.  If optional args
      *block* is "True" (the default) and *timeout* is "None" (the
      default), block if necessary until an item is available.  If
      *timeout* is a positive number, it blocks at most *timeout*
      seconds and raises the "queue.Empty" exception if no item was
      available within that time.  Otherwise (*block* is "False"),
      return an item if one is immediately available, else raise the
      "queue.Empty" exception (*timeout* is ignored in that case).

      Changed in version 3.8: If the queue is closed, "ValueError" is
      raised instead of "OSError".

   get_nowait()

      Equivalente a "get(False)".

   "multiprocessing.Queue" tem alguns métodos adicionais não
   encontrados em "queue.Queue". Esses métodos geralmente são
   desnecessários para a maioria dos códigos:

   close()

      Indicate that no more data will be put on this queue by the
      current process.  The background thread will quit once it has
      flushed all buffered data to the pipe.  This is called
      automatically when the queue is garbage collected.

   join_thread()

      Join the background thread.  This can only be used after
      "close()" has been called.  It blocks until the background
      thread exits, ensuring that all data in the buffer has been
      flushed to the pipe.

      By default if a process is not the creator of the queue then on
      exit it will attempt to join the queue's background thread.
      The process can call "cancel_join_thread()" to make
      "join_thread()" do nothing.

   cancel_join_thread()

      Impede que "join_thread()" bloqueie. Em particular, isso impede
      que o thread de segundo plano seja unido automaticamente quando
      o processo sai -- veja "join_thread()".

      A better name for this method might be
      "allow_exit_without_flush()".  It is likely to cause enqueued
      data to lost, and you almost certainly will not need to use it.
      It is really only there if you need the current process to exit
      immediately without waiting to flush enqueued data to the
      underlying pipe, and you don't care about lost data.

   Note:

     This class's functionality requires a functioning shared
     semaphore implementation on the host operating system.  Without
     one, the functionality in this class will be disabled, and
     attempts to instantiate a "Queue" will result in an
     "ImportError".  See bpo-3770 for additional information.  The
     same holds true for any of the specialized queue types listed
     below.

class multiprocessing.SimpleQueue

   É um tipo "Queue" simplificado, muito próximo de um "Pipe" travado.

   empty()

      Retorna "True" se a fila estiver vazia, "False" caso contrário.

   get()

      Remove and return an item from the queue.

   put(item)

      Put *item* into the queue.

class multiprocessing.JoinableQueue([maxsize])

   "JoinableQueue", uma subclasse de "Queue", é uma fila que também
   possui os métodos "task_done()" e "join()".

   task_done()

      Indicate that a formerly enqueued task is complete.  Used by
      queue consumers.  For each "get()" used to fetch a task, a
      subsequent call to "task_done()" tells the queue that the
      processing on the task is complete.

      If a "join()" is currently blocking, it will resume when all
      items have been processed (meaning that a "task_done()" call
      was received for every item that had been "put()" into the
      queue).

      Raises a "ValueError" if called more times than there were
      items placed in the queue.

   join()

      Block until all items in the queue have been gotten and
      processed.

      The count of unfinished tasks goes up whenever an item is added
      to the queue.  The count goes down whenever a consumer calls
      "task_done()" to indicate that the item was retrieved and all
      work on it is complete.  When the count of unfinished tasks
      drops to zero, "join()" unblocks.


Miscellaneous
-------------

multiprocessing.active_children()

   Return list of all live children of the current process.

   Calling this has the side effect of "joining" any processes which
   have already finished.

multiprocessing.cpu_count()

   Return the number of CPUs in the system.

   This number is not equivalent to the number of CPUs the current
   process can use.  The number of usable CPUs can be obtained with
   "len(os.sched_getaffinity(0))"

   May raise "NotImplementedError".

   See also: "os.cpu_count()"
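
   For example, a minimal sketch ("os.sched_getaffinity()" is only
   available on some Unix platforms):

      import multiprocessing
      import os

      print(multiprocessing.cpu_count())     # CPUs in the system
      print(len(os.sched_getaffinity(0)))    # CPUs this process may use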

multiprocessing.current_process()

   Retorna o objeto "Process" correspondente ao processo atual.

   Um análogo de "threading.current_thread()".

multiprocessing.parent_process()

   Retorna o objeto "Process" correspondente ao processo pai do
   "current_process()". Para o processo principal, "parent_process"
   será "None".

   Novo na versão 3.8.

multiprocessing.freeze_support()

   Add support for when a program which uses "multiprocessing" has
   been frozen to produce a Windows executable.  (Has been tested
   with **py2exe**, **PyInstaller** and **cx_Freeze**.)

   One needs to call this function straight after the "if __name__ ==
   '__main__'" line of the main module.  For example:

      from multiprocessing import Process, freeze_support

      def f():
          print('hello world!')

      if __name__ == '__main__':
          freeze_support()
          Process(target=f).start()

   Se a linha "freeze_support()" for omitida, tentar executar o
   executável congelado levantará "RuntimeError".

   Chamar "freeze_support()" não tem efeito quando invocado em
   qualquer sistema operacional que não seja o Windows. Além disso, se
   o módulo estiver sendo executado normalmente pelo interpretador
   Python no Windows (o programa não foi congelado), então
   "freeze_support()" não tem efeito.

multiprocessing.get_all_start_methods()

   Returns a list of the supported start methods, the first of which
   is the default.  The possible start methods are "'fork'", "'spawn'"
   and "'forkserver'".  On Windows only "'spawn'" is available.  On
   Unix "'fork'" and "'spawn'" are always supported, with "'fork'"
   being the default.

   New in version 3.4.

multiprocessing.get_context(method=None)

   Return a context object which has the same attributes as the
   "multiprocessing" module.

   If *method* is "None" then the default context is returned.
   Otherwise *method* should be "'fork'", "'spawn'", "'forkserver'".
   "ValueError" is raised if the specified start method is not
   available.

   New in version 3.4.

multiprocessing.get_start_method(allow_none=False)

   Return the name of start method used for starting processes.

   If the start method has not been fixed and *allow_none* is false,
   then the start method is fixed to the default and the name is
   returned.  If the start method has not been fixed and *allow_none*
   is true then "None" is returned.

   The return value can be "'fork'", "'spawn'", "'forkserver'" or
   "None".  "'fork'" is the default on Unix, while "'spawn'" is the
   default on Windows and macOS.

Changed in version 3.8: On macOS, the *spawn* start method is now the
default.  The *fork* start method should be considered unsafe as it
can lead to crashes of the subprocess. See bpo-33725.

New in version 3.4.

multiprocessing.set_executable()

   Sets the path of the Python interpreter to use when starting a
   child process. (By default "sys.executable" is used).  Embedders
   will probably need to do something like

      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))

   before they can create child processes.

   Changed in version 3.4: Now supported on Unix when the "'spawn'"
   start method is used.

multiprocessing.set_start_method(method)

   Set the method which should be used to start child processes.
   *method* can be "'fork'", "'spawn'" or "'forkserver'".

   Note that this should be called at most once, and it should be
   protected inside the "if __name__ == '__main__'" clause of the
   main module.

   New in version 3.4.

Note:

  "multiprocessing" contains no analogues of
  "threading.active_count()", "threading.enumerate()",
  "threading.settrace()", "threading.setprofile()", "threading.Timer"
  or "threading.local".


Connection objects
------------------

Connection objects allow the sending and receiving of picklable
objects or strings.  They can be thought of as message oriented
connected sockets.

Connection objects are usually created using "Pipe" -- see also
Listeners and Clients.

class multiprocessing.connection.Connection

   send(obj)

      Send an object to the other end of the connection which should
      be read using "recv()".

      The object must be picklable.  Very large pickles
      (approximately 32 MiB+, though it depends on the OS) may raise
      a "ValueError" exception.

   recv()

      Return an object sent from the other end of the connection
      using "send()".  Blocks until there is something to receive.
      Raises "EOFError" if there is nothing left to receive and the
      other end was closed.

   fileno()

      Return the file descriptor or handle used by the connection.

   close()

      Close the connection.

      This is called automatically when the connection is garbage
      collected.

   poll([timeout])

      Return whether there is any data available to be read.

      If *timeout* is not specified then it will return immediately.
      If *timeout* is a number then this specifies the maximum time
      in seconds to block.  If *timeout* is "None" then an infinite
      timeout is used.

      Note that multiple connection objects may be polled at once by
      using "multiprocessing.connection.wait()".

   send_bytes(buffer[, offset[, size]])

      Send byte data from a *bytes-like object* as a complete
      message.

      If *offset* is given then data is read from that position in
      *buffer*.  If *size* is given then that many bytes will be read
      from buffer.  Very large buffers (approximately 32 MiB+, though
      it depends on the OS) may raise a "ValueError" exception.

   recv_bytes([maxlength])

      Return a complete message of byte data sent from the other end
      of the connection as a string.  Blocks until there is something
      to receive.  Raises "EOFError" if there is nothing left to
      receive and the other end has closed.

      If *maxlength* is specified and the message is longer than
      *maxlength* then "OSError" is raised and the connection will no
      longer be readable.

      Changed in version 3.3: This function used to raise "IOError",
      which is now an alias of "OSError".

   recv_bytes_into(buffer[, offset])

      Read into *buffer* a complete message of byte data sent from
      the other end of the connection and return the number of bytes
      in the message.  Blocks until there is something to receive.
      Raises "EOFError" if there is nothing left to receive and the
      other end was closed.

      *buffer* must be a writable *bytes-like object*.  If *offset*
      is given then the message will be written into the buffer from
      that position.  Offset must be a non-negative integer less than
      the length of *buffer* (in bytes).

      If the buffer is too short then a "BufferTooShort" exception is
      raised and the complete message is available as "e.args[0]"
      where "e" is the exception instance.

   Changed in version 3.3: Connection objects themselves can now be
   transferred between processes using "Connection.send()" and
   "Connection.recv()".

   New in version 3.3: Connection objects now support the context
   management protocol -- see Context Manager Types.  "__enter__()"
   returns the connection object, and "__exit__()" calls "close()".

For example:

   >>> from multiprocessing import Pipe
   >>> a, b = Pipe()
   >>> a.send([1, 'hello', None])
   >>> b.recv()
   [1, 'hello', None]
   >>> b.send_bytes(b'thank you')
   >>> a.recv_bytes()
   b'thank you'
   >>> import array
   >>> arr1 = array.array('i', range(5))
   >>> arr2 = array.array('i', [0] * 10)
   >>> a.send_bytes(arr1)
   >>> count = b.recv_bytes_into(arr2)
   >>> assert count == len(arr1) * arr1.itemsize
   >>> arr2
   array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])

Warning:

  The "Connection.recv()" method automatically unpickles the data it
  receives, which can be a security risk unless you can trust the
  process which sent the message.  Therefore, unless the connection
  object was produced using "Pipe()" you should only use the "recv()"
  and "send()" methods after performing some sort of authentication.
  See Authentication keys.

Warning:

  If a process is killed while it is trying to read or write to a
  pipe then the data in the pipe is likely to become corrupted,
  because it may become impossible to be sure where the message
  boundaries lie.


Synchronization primitives
--------------------------

Generally synchronization primitives are not as necessary in a
multiprocess program as they are in a multithreaded program.  See the
documentation for the "threading" module.

Note that one can also create synchronization primitives by using a
manager object -- see Managers.

class multiprocessing.Barrier(parties[, action[, timeout]])

   A barrier object: a clone of "threading.Barrier".

   New in version 3.3.

class multiprocessing.BoundedSemaphore([value])

   A bounded semaphore object: a close analog of
   "threading.BoundedSemaphore".

   A solitary difference from its close analog exists: its "acquire"
   method's first argument is named *block*, as is consistent with
   "Lock.acquire()".

   Note:

     On Mac OS X, this is indistinguishable from "Semaphore" because
     "sem_getvalue()" is not implemented on that platform.

class multiprocessing.Condition([lock])

   A condition variable: an alias for "threading.Condition".

   If *lock* is specified then it should be a "Lock" or "RLock"
   object from "multiprocessing".

   Changed in version 3.3: The "wait_for()" method was added.

class multiprocessing.Event

   Um clone de "threading.Event".

class multiprocessing.Lock

   A non-recursive lock object: a close analog of "threading.Lock".
   Once a process or thread has acquired a lock, subsequent attempts
   to acquire it from any process or thread will block until it is
   released; any process or thread may release it.  The concepts and
   behaviors of "threading.Lock" as it applies to threads are
   replicated here in "multiprocessing.Lock" as it applies to either
   processes or threads, except as noted.

   Note that "Lock" is actually a factory function which returns an
   instance of "multiprocessing.synchronize.Lock" initialized with a
   default context.

   "Lock" supports the *context manager* protocol and thus may be
   used in "with" statements.
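
   For example, a minimal sketch:

      from multiprocessing import Lock

      lock = Lock()
      with lock:   # acquired on entry, released on exit
          print('critical section')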

   acquire(block=True, timeout=None)

      Acquire a lock, blocking or non-blocking.

      With the *block* argument set to "True" (the default), the
      method call will block until the lock is in an unlocked state,
      then set it to locked and return "True".  Note that the name of
      this first argument differs from that in
      "threading.Lock.acquire()".

      With the *block* argument set to "False", the method call does
      not block.  If the lock is currently in a locked state, return
      "False"; otherwise set the lock to a locked state and return
      "True".

      When invoked with a positive, floating-point value for
      *timeout*, block for at most the number of seconds specified by
      *timeout* as long as the lock can not be acquired.  Invocations
      with a negative value for *timeout* are equivalent to a
      *timeout* of zero.  Invocations with a *timeout* value of
      "None" (the default) set the timeout period to infinite.  Note
      that the treatment of negative or "None" values for *timeout*
      differs from the implemented behavior in
      "threading.Lock.acquire()".  The *timeout* argument has no
      practical implications if the *block* argument is set to
      "False" and is thus ignored.  Returns "True" if the lock has
      been acquired or "False" if the timeout period has elapsed.

   release()

      Release a lock.  This can be called from any process or thread,
      not only the process or thread which originally acquired the
      lock.
      Behavior is the same as in "threading.Lock.release()" except
      that when invoked on an unlocked lock, a "ValueError" is raised.

class multiprocessing.RLock

   A recursive lock object: a close analog of "threading.RLock".  A
   recursive lock must be released by the process or thread that
   acquired it. Once a process or thread has acquired a recursive
   lock, the same process or thread may acquire it again without
   blocking; that process or thread must release it once for each time
   it has been acquired.

   Note that "RLock" is actually a factory function which returns an
   instance of "multiprocessing.synchronize.RLock" initialized with a
   default context.

   "RLock" supports the *context manager* protocol and thus may be
   used in "with" statements.
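
   For example, a minimal sketch of recursive acquisition:

      from multiprocessing import RLock

      rlock = RLock()
      with rlock:        # first acquisition by this process/thread
          with rlock:    # re-acquired without blocking
              print('recursion level is 2 here')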

   acquire(block=True, timeout=None)

      Acquire a lock, blocking or non-blocking.

      When invoked with the *block* argument set to "True", block
      until the lock is in an unlocked state (not owned by any process
      or thread) unless the lock is already owned by the current
      process or thread.  The current process or thread then takes
      ownership of the lock (if it does not already have ownership)
      and the recursion level inside the lock increments by one,
      resulting in a return value of "True".  Note that there are
      several differences in this first argument's behavior compared
      to the implementation of "threading.RLock.acquire()", starting
      with the name of the argument itself.

      When invoked with the *block* argument set to "False", do not
      block. If the lock has already been acquired (and thus is owned)
      by another process or thread, the current process or thread does
      not take ownership and the recursion level within the lock is
      not changed, resulting in a return value of "False".  If the
      lock is in an unlocked state, the current process or thread
      takes ownership and the recursion level is incremented,
      resulting in a return value of "True".

      Use and behaviors of the *timeout* argument are the same as in
      "Lock.acquire()".  Note that some of these behaviors of
      *timeout* differ from the implemented behaviors in
      "threading.RLock.acquire()".

   release()

      Release a lock, decrementing the recursion level.  If after the
      decrement the recursion level is zero, reset the lock to
      unlocked (not owned by any process or thread) and if any other
      processes or threads are blocked waiting for the lock to become
      unlocked, allow exactly one of them to proceed.  If after the
      decrement the recursion level is still nonzero, the lock remains
      locked and owned by the calling process or thread.

      Only call this method when the calling process or thread owns
      the lock. An "AssertionError" is raised if this method is called
      by a process or thread other than the owner or if the lock is in
      an unlocked (unowned) state.  Note that the type of exception
      raised in this situation differs from the implemented behavior
      in "threading.RLock.release()".

class multiprocessing.Semaphore([value])

   A semaphore object: a close analog of "threading.Semaphore".

   A solitary difference from its close analog exists: its "acquire"
   method's first argument is named *block*, as is consistent with
   "Lock.acquire()".

Note:

  On Mac OS X, "sem_timedwait" is unsupported, so calling "acquire()"
  with a timeout will emulate that function's behavior using a
  sleeping loop.

Note:

  If the SIGINT signal generated by "Ctrl-C" arrives while the main
  thread is blocked by a call to "BoundedSemaphore.acquire()",
  "Lock.acquire()", "RLock.acquire()", "Semaphore.acquire()",
  "Condition.acquire()" or "Condition.wait()" then the call will be
  immediately interrupted and "KeyboardInterrupt" will be raised.
  This differs from the behaviour of "threading" where SIGINT will be
  ignored while the equivalent blocking calls are in progress.

Note:

  Some of this package's functionality requires a functioning shared
  semaphore implementation on the host operating system. Without one,
  the "multiprocessing.synchronize" module will be disabled, and
  attempts to import it will result in an "ImportError". See bpo-3770
  for additional information.


Shared "ctypes" Objects
-----------------------

It is possible to create shared objects using shared memory which can
be inherited by child processes.

multiprocessing.Value(typecode_or_type, *args, lock=True)

   Return a "ctypes" object allocated from shared memory.  By default
   the return value is actually a synchronized wrapper for the object.
   The object itself can be accessed via the *value* attribute of a
   "Value".

   *typecode_or_type* determines the type of the returned object: it
   is either a ctypes type or a one character typecode of the kind
   used by the "array" module.  **args* is passed on to the
   constructor for the type.

   If *lock* is "True" (the default) then a new recursive lock object
   is created to synchronize access to the value.  If *lock* is a
   "Lock" or "RLock" object then that will be used to synchronize
   access to the value.  If *lock* is "False" then access to the
   returned object will not be automatically protected by a lock, so
   it will not necessarily be "process-safe".

   Operations like "+=" which involve a read and write are not atomic.
   So if, for instance, you want to atomically increment a shared
   value it is insufficient to just do

      counter.value += 1

   Assuming the associated lock is recursive (which it is by default)
   you can instead do

      with counter.get_lock():
          counter.value += 1

   Note that *lock* is a keyword-only argument.

multiprocessing.Array(typecode_or_type, size_or_initializer, *, lock=True)

   Return a ctypes array allocated from shared memory.  By default the
   return value is actually a synchronized wrapper for the array.

   *typecode_or_type* determines the type of the elements of the
   returned array: it is either a ctypes type or a one character
   typecode of the kind used by the "array" module.  If
   *size_or_initializer* is an integer, then it determines the length
   of the array, and the array will be initially zeroed. Otherwise,
   *size_or_initializer* is a sequence which is used to initialize the
   array and whose length determines the length of the array.

   If *lock* is "True" (the default) then a new lock object is created
   to synchronize access to the value.  If *lock* is a "Lock" or
   "RLock" object then that will be used to synchronize access to the
   value.  If *lock* is "False" then access to the returned object
   will not be automatically protected by a lock, so it will not
   necessarily be "process-safe".

   Note that *lock* is a keyword only argument.

   Note that an array of "ctypes.c_char" has *value* and *raw*
   attributes which allow one to use it to store and retrieve strings.
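
   For example, a minimal sketch:

      from multiprocessing import Array

      buf = Array('c', b'hello world')
      print(buf.value)   # b'hello world' -- bytes up to the first NUL
      print(buf.raw)     # the whole underlying buffer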


The "multiprocessing.sharedctypes" module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The "multiprocessing.sharedctypes" module provides functions for
allocating "ctypes" objects from shared memory which can be inherited
by child processes.

Note:

  Although it is possible to store a pointer in shared memory remember
  that this will refer to a location in the address space of a
  specific process. However, the pointer is quite likely to be invalid
  in the context of a second process and trying to dereference the
  pointer from the second process may cause a crash.

multiprocessing.sharedctypes.RawArray(typecode_or_type, size_or_initializer)

   Return a ctypes array allocated from shared memory.

   *typecode_or_type* determines the type of the elements of the
   returned array: it is either a ctypes type or a one character
   typecode of the kind used by the "array" module.  If
   *size_or_initializer* is an integer then it determines the length
   of the array, and the array will be initially zeroed. Otherwise
   *size_or_initializer* is a sequence which is used to initialize the
   array and whose length determines the length of the array.

   Note that setting and getting an element is potentially non-atomic
   -- use "Array()" instead to make sure that access is automatically
   synchronized using a lock.

multiprocessing.sharedctypes.RawValue(typecode_or_type, *args)

   Return a ctypes object allocated from shared memory.

   *typecode_or_type* determines the type of the returned object: it
   is either a ctypes type or a one character typecode of the kind
   used by the "array" module.  **args* is passed on to the
   constructor for the type.

   Note that setting and getting the value is potentially non-atomic
   -- use "Value()" instead to make sure that access is automatically
   synchronized using a lock.

   Note that an array of "ctypes.c_char" has "value" and "raw"
   attributes which allow one to use it to store and retrieve strings
   -- see documentation for "ctypes".

multiprocessing.sharedctypes.Array(typecode_or_type, size_or_initializer, *, lock=True)

   The same as "RawArray()" except that depending on the value of
   *lock* a process-safe synchronization wrapper may be returned
   instead of a raw ctypes array.

   If *lock* is "True" (the default) then a new lock object is created
   to synchronize access to the value.  If *lock* is a "Lock" or
   "RLock" object then that will be used to synchronize access to the
   value.  If *lock* is "False" then access to the returned object
   will not be automatically protected by a lock, so it will not
   necessarily be "process-safe".

   Note that *lock* is a keyword-only argument.

multiprocessing.sharedctypes.Value(typecode_or_type, *args, lock=True)

   The same as "RawValue()" except that depending on the value of
   *lock* a process-safe synchronization wrapper may be returned
   instead of a raw ctypes object.

   If *lock* is "True" (the default) then a new lock object is created
   to synchronize access to the value.  If *lock* is a "Lock" or
   "RLock" object then that will be used to synchronize access to the
   value.  If *lock* is "False" then access to the returned object
   will not be automatically protected by a lock, so it will not
   necessarily be "process-safe".

   Note that *lock* is a keyword-only argument.

multiprocessing.sharedctypes.copy(obj)

   Return a ctypes object allocated from shared memory which is a copy
   of the ctypes object *obj*.

multiprocessing.sharedctypes.synchronized(obj[, lock])

   Return a process-safe wrapper object for a ctypes object which uses
   *lock* to synchronize access.  If *lock* is "None" (the default)
   then a "multiprocessing.RLock" object is created automatically.

   A synchronized wrapper will have two methods in addition to those
   of the object it wraps: "get_obj()" returns the wrapped object and
   "get_lock()" returns the lock object used for synchronization.

   Note that accessing the ctypes object through the wrapper can be a
   lot slower than accessing the raw ctypes object.

   Changed in version 3.5: Synchronized objects support the *context
   manager* protocol.
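
   A minimal sketch of wrapping a raw value by hand:

      from multiprocessing.sharedctypes import RawValue, synchronized

      raw = RawValue('i', 0)
      v = synchronized(raw)      # uses a fresh multiprocessing.RLock
      with v:                    # context manager support (3.5+)
          v.get_obj().value += 1
      print(v.get_obj().value)   # 1
      print(v.get_lock())        # the RLock used for synchronization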

The table below compares the syntax for creating shared ctypes objects
from shared memory with the normal ctypes syntax.  (In the table
"MyStruct" is some subclass of "ctypes.Structure".)

+----------------------+----------------------------+-----------------------------+
| ctypes               | sharedctypes using type    | sharedctypes using typecode |
|======================|============================|=============================|
| c_double(2.4)        | RawValue(c_double, 2.4)    | RawValue('d', 2.4)          |
+----------------------+----------------------------+-----------------------------+
| MyStruct(4, 6)       | RawValue(MyStruct, 4, 6)   |                             |
+----------------------+----------------------------+-----------------------------+
| (c_short * 7)()      | RawArray(c_short, 7)       | RawArray('h', 7)            |
+----------------------+----------------------------+-----------------------------+
| (c_int * 3)(9, 2, 8) | RawArray(c_int, (9, 2, 8)) | RawArray('i', (9, 2, 8))    |
+----------------------+----------------------------+-----------------------------+

Below is an example where a number of ctypes objects are modified by a
child process:

   from multiprocessing import Process, Lock
   from multiprocessing.sharedctypes import Value, Array
   from ctypes import Structure, c_double

   class Point(Structure):
       _fields_ = [('x', c_double), ('y', c_double)]

   def modify(n, x, s, A):
       n.value **= 2
       x.value **= 2
       s.value = s.value.upper()
       for a in A:
           a.x **= 2
           a.y **= 2

   if __name__ == '__main__':
       lock = Lock()

       n = Value('i', 7)
       x = Value(c_double, 1.0/3.0, lock=False)
       s = Array('c', b'hello world', lock=lock)
       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)

       p = Process(target=modify, args=(n, x, s, A))
       p.start()
       p.join()

       print(n.value)
       print(x.value)
       print(s.value)
       print([(a.x, a.y) for a in A])

The results printed are

   49
   0.1111111111111111
   HELLO WORLD
   [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]


Managers
--------

Managers provide a way to create data which can be shared between
different processes, including sharing over a network between
processes running on different machines. A manager object controls a
server process which manages *shared objects*.  Other processes can
access the shared objects by using proxies.

multiprocessing.Manager()

   Returns a started "SyncManager" object which can be used for
   sharing objects between processes.  The returned manager object
   corresponds to a spawned child process and has methods which will
   create shared objects and return corresponding proxies.

Manager processes will be shut down as soon as they are garbage
collected or their parent process exits.  The manager classes are
defined in the "multiprocessing.managers" module:
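
For instance, a minimal sketch of sharing a dictionary between a
parent and a child process:

   from multiprocessing import Manager, Process

   def worker(d):
       d['count'] = d.get('count', 0) + 1

   if __name__ == '__main__':
       with Manager() as manager:
           d = manager.dict()            # proxy for a dict held by the manager
           p = Process(target=worker, args=(d,))
           p.start()
           p.join()
           print(dict(d))                # {'count': 1}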

class multiprocessing.managers.BaseManager([address[, authkey]])

   Create a BaseManager object.

   Once created one should call "start()" or
   "get_server().serve_forever()" to ensure that the manager object
   refers to a started manager process.

   *address* is the address on which the manager process listens for
   new connections.  If *address* is "None" then an arbitrary one is
   chosen.

   *authkey* is the authentication key which will be used to check the
   validity of incoming connections to the server process.  If
   *authkey* is "None" then "current_process().authkey" is used.
   Otherwise *authkey* is used and it must be a byte string.

   start([initializer[, initargs]])

      Start a subprocess to start the manager.  If *initializer* is
      not "None" then the subprocess will call
      "initializer(*initargs)" when it starts.

   get_server()

      Returns a "Server" object which represents the actual server
      under the control of the Manager. The "Server" object supports
      the "serve_forever()" method:

         >>> from multiprocessing.managers import BaseManager
         >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
         >>> server = manager.get_server()
         >>> server.serve_forever()

      "Server" additionally has an "address" attribute.

   connect()

      Connect a local manager object to a remote manager process:

         >>> from multiprocessing.managers import BaseManager
         >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
         >>> m.connect()

   shutdown()

      Stop the process used by the manager.  This is only available if
      "start()" has been used to start the server process.

      This can be called multiple times.

   register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])

      A classmethod which can be used for registering a type or
      callable with the manager class.

      *typeid* is a "type identifier" which is used to identify a
      particular type of shared object.  This must be a string.

      *callable* is a callable used for creating objects for this type
      identifier.  If a manager instance will be connected to the
      server using the "connect()" method, or if the *create_method*
      argument is "False" then this can be left as "None".

      *proxytype* is a subclass of "BaseProxy" which is used to create
      proxies for shared objects with this *typeid*.  If "None" then a
      proxy class is created automatically.

      *exposed* is used to specify a sequence of method names which
      proxies for this typeid should be allowed to access using
      "BaseProxy._callmethod()".  (If *exposed* is "None" then
      "proxytype._exposed_" is used instead if it exists.)  In the
      case where no exposed list is specified, all "public methods" of
      the shared object will be accessible.  (Here a "public method"
      means any attribute which has a "__call__()" method and whose
      name does not begin with "'_'".)

      *method_to_typeid* is a mapping used to specify the return type
      of those exposed methods which should return a proxy.  It maps
      method names to typeid strings.  (If *method_to_typeid* is
      "None" then "proxytype._method_to_typeid_" is used instead if it
      exists.)  If a method's name is not a key of this mapping or if
      the mapping is "None" then the object returned by the method
      will be copied by value.

      *create_method* determines whether a method should be created
      with name *typeid* which can be used to tell the server process
      to create a new shared object and return a proxy for it.  By
      default it is "True".

   "BaseManager" instances also have one read-only property:

   address

      The address used by the manager.

   Changed in version 3.3: Manager objects support the context
   management protocol -- see Context Manager Types.  "__enter__()"
   starts the server process (if it has not already started) and then
   returns the manager object.  "__exit__()" calls "shutdown()".  In
   previous versions "__enter__()" did not start the manager's server
   process if it was not already started.

class multiprocessing.managers.SyncManager

   A subclass of "BaseManager" which can be used for the
   synchronization of processes.  Objects of this type are returned by
   "multiprocessing.Manager()".

   Its methods create and return Proxy Objects for a number of
   commonly used data types to be synchronized across processes. This
   notably includes shared lists and dictionaries.

   Barrier(parties[, action[, timeout]])

      Create a shared "threading.Barrier" object and return a proxy
      for it.

      New in version 3.3.

   BoundedSemaphore([value])

      Create a shared "threading.BoundedSemaphore" object and return a
      proxy for it.

   Condition([lock])

      Create a shared "threading.Condition" object and return a proxy
      for it.

      If *lock* is supplied then it should be a proxy for a
      "threading.Lock" or "threading.RLock" object.

      Changed in version 3.3: The "wait_for()" method was added.

   Event()

      Create a shared "threading.Event" object and return a proxy for
      it.

   Lock()

      Create a shared "threading.Lock" object and return a proxy for
      it.

   Namespace()

      Create a shared "Namespace" object and return a proxy for it.

   Queue([maxsize])

      Create a shared "queue.Queue" object and return a proxy for it.

   RLock()

      Create a shared "threading.RLock" object and return a proxy for
      it.

   Semaphore([value])

      Create a shared "threading.Semaphore" object and return a proxy
      for it.

   Array(typecode, sequence)

      Create an array and return a proxy for it.

   Value(typecode, value)

      Create an object with a writable "value" attribute and return a
      proxy for it.

   dict()
   dict(mapping)
   dict(sequence)

      Create a shared "dict" object and return a proxy for it.

   list()
   list(sequence)

      Create a shared "list" object and return a proxy for it.

   Changed in version 3.6: Shared objects are capable of being nested.
   For example, a shared container object such as a shared list can
   contain other shared objects which will all be managed and
   synchronized by the "SyncManager".

class multiprocessing.managers.Namespace

   A type that can register with "SyncManager".

   A namespace object has no public methods, but does have writable
   attributes. Its representation shows the values of its attributes.

   However, when using a proxy for a namespace object, an attribute
   beginning with "'_'" will be an attribute of the proxy and not an
   attribute of the referent:

      >>> manager = multiprocessing.Manager()
      >>> Global = manager.Namespace()
      >>> Global.x = 10
      >>> Global.y = 'hello'
      >>> Global._z = 12.3    # this is an attribute of the proxy
      >>> print(Global)
      Namespace(x=10, y='hello')


Customized managers
~~~~~~~~~~~~~~~~~~~

To create one's own manager, one creates a subclass of "BaseManager"
and uses the "register()" classmethod to register new types or
callables with the manager class.  For example:

   from multiprocessing.managers import BaseManager

   class MathsClass:
       def add(self, x, y):
           return x + y
       def mul(self, x, y):
           return x * y

   class MyManager(BaseManager):
       pass

   MyManager.register('Maths', MathsClass)

   if __name__ == '__main__':
       with MyManager() as manager:
           maths = manager.Maths()
           print(maths.add(4, 3))         # prints 7
           print(maths.mul(7, 8))         # prints 56


Using a remote manager
~~~~~~~~~~~~~~~~~~~~~~

It is possible to run a manager server on one machine and have clients
use it from other machines (assuming that the firewalls involved allow
it).

Running the following commands creates a server for a single shared
queue which remote clients can access:

   >>> from multiprocessing.managers import BaseManager
   >>> from queue import Queue
   >>> queue = Queue()
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue', callable=lambda:queue)
   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
   >>> s = m.get_server()
   >>> s.serve_forever()

One client can access the server as follows:

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.put('hello')

Another client can also use it:

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.get()
   'hello'

Local processes can also access that queue, using the code from above
on the client to access it remotely:

   >>> from multiprocessing import Process, Queue
   >>> from multiprocessing.managers import BaseManager
   >>> class Worker(Process):
   ...     def __init__(self, q):
   ...         self.q = q
   ...         super().__init__()
   ...     def run(self):
   ...         self.q.put('local hello')
   ...
   >>> queue = Queue()
   >>> w = Worker(queue)
   >>> w.start()
   >>> class QueueManager(BaseManager): pass
   ...
   >>> QueueManager.register('get_queue', callable=lambda: queue)
   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
   >>> s = m.get_server()
   >>> s.serve_forever()


Proxy Objects
-------------

A proxy is an object which *refers* to a shared object which lives
(presumably) in a different process.  The shared object is said to be
the *referent* of the proxy.  Multiple proxy objects may have the same
referent.

A proxy object has methods which invoke corresponding methods of its
referent (although not every method of the referent will necessarily
be available through the proxy).  In this way, a proxy can be used
just like its referent can:

   >>> from multiprocessing import Manager
   >>> manager = Manager()
   >>> l = manager.list([i*i for i in range(10)])
   >>> print(l)
   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
   >>> print(repr(l))
   <ListProxy object, typeid 'list' at 0x...>
   >>> l[4]
   16
   >>> l[2:5]
   [4, 9, 16]

Notice that applying "str()" to a proxy will return the representation
of the referent, whereas applying "repr()" will return the
representation of the proxy.

An important feature of proxy objects is that they are picklable so
they can be passed between processes.  As such, a referent can contain
Proxy Objects.  This permits nesting of these managed lists, dicts,
and other Proxy Objects:

   >>> a = manager.list()
   >>> b = manager.list()
   >>> a.append(b)         # referent of a now contains referent of b
   >>> print(a, b)
   [<ListProxy object, typeid 'list' at ...>] []
   >>> b.append('hello')
   >>> print(a[0], b)
   ['hello'] ['hello']

Similarly, dict and list proxies may be nested inside one another:

   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
   >>> d_first_inner = l_outer[0]
   >>> d_first_inner['a'] = 1
   >>> d_first_inner['b'] = 2
   >>> l_outer[1]['c'] = 3
   >>> l_outer[1]['z'] = 26
   >>> print(l_outer[0])
   {'a': 1, 'b': 2}
   >>> print(l_outer[1])
   {'c': 3, 'z': 26}

If standard (non-proxy) "list" or "dict" objects are contained in a
referent, modifications to those mutable values will not be propagated
through the manager because the proxy has no way of knowing when the
values contained within are modified.  However, storing a value in a
container proxy (which triggers a "__setitem__" on the proxy object)
does propagate through the manager and so to effectively modify such
an item, one could re-assign the modified value to the container
proxy:

   # create a list proxy and append a mutable object (a dictionary)
   lproxy = manager.list()
   lproxy.append({})
   # now mutate the dictionary
   d = lproxy[0]
   d['a'] = 1
   d['b'] = 2
   # at this point, the changes to d are not yet synced, but by
   # updating the dictionary, the proxy is notified of the change
   lproxy[0] = d

This approach is perhaps less convenient than employing nested Proxy
Objects for most use cases but also demonstrates a level of control
over the synchronization.

Note:

  The proxy types in "multiprocessing" do nothing to support
  comparisons by value.  So, for instance, we have:

     >>> manager.list([1,2,3]) == [1,2,3]
     False

  One should just use a copy of the referent instead when making
  comparisons.
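
  For instance, continuing the example above, comparing a local copy
  works as expected:

     >>> list(manager.list([1, 2, 3])) == [1, 2, 3]
     True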

class multiprocessing.managers.BaseProxy

   Proxy objects are instances of subclasses of "BaseProxy".

   _callmethod(methodname[, args[, kwds]])

      Call and return the result of a method of the proxy's referent.

      If "proxy" is a proxy whose referent is "obj" then the
      expression

         proxy._callmethod(methodname, args, kwds)

      will evaluate the expression

         getattr(obj, methodname)(*args, **kwds)

      in the manager's process.

      The returned value will be a copy of the result of the call or a
      proxy to a new shared object -- see documentation for the
      *method_to_typeid* argument of "BaseManager.register()".

      If an exception is raised by the call, then it is re-raised by
      "_callmethod()".  If some other exception is raised in the
      manager's process then this is converted into a "RemoteError"
      exception and is raised by "_callmethod()".

      Note in particular that an exception will be raised if
      *methodname* has not been *exposed*.

      An example of the usage of "_callmethod()":

         >>> l = manager.list(range(10))
         >>> l._callmethod('__len__')
         10
         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
         [2, 3, 4, 5, 6]
         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
         Traceback (most recent call last):
         ...
         IndexError: list index out of range

   _getvalue()

      Return a copy of the referent.

      If the referent is unpicklable then this will raise an
      exception.

   __repr__()

      Return a representation of the proxy object.

   __str__()

      Return the representation of the referent.


Cleanup
~~~~~~~

A proxy object uses a weakref callback so that when it gets garbage
collected it deregisters itself from the manager which owns its
referent.

A shared object gets deleted from the manager process when there are
no longer any proxies referring to it.


Process Pools
-------------

One can create a pool of processes which will carry out tasks
submitted to it with the "Pool" class.

class multiprocessing.pool.Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])

   A process pool object which controls a pool of worker processes to
   which jobs can be submitted.  It supports asynchronous results with
   timeouts and callbacks and has a parallel map implementation.

   *processes* is the number of worker processes to use.  If
   *processes* is "None" then the number returned by "os.cpu_count()"
   is used.

   If *initializer* is not "None" then each worker process will call
   "initializer(*initargs)" when it starts.

   *maxtasksperchild* is the number of tasks a worker process can
   complete before it will exit and be replaced with a fresh worker
   process, to enable unused resources to be freed. The default
   *maxtasksperchild* is "None", which means worker processes will
   live as long as the pool.

   *context* can be used to specify the context used for starting the
   worker processes.  Usually a pool is created using the function
   "multiprocessing.Pool()" or the "Pool()" method of a context
   object.  In both cases *context* is set appropriately.

   Note that the methods of the pool object should only be called by
   the process which created the pool.

   Warning:

     "multiprocessing.pool" objects have internal resources that need
     to be properly managed (like any other resource) by using the
     pool as a context manager or by calling "close()" and
     "terminate()" manually. Failure to do this can lead to the
     process hanging on finalization.Note that is **not correct** to
     rely on the garbage colletor to destroy the pool as CPython does
     not assure that the finalizer of the pool will be called (see
     "object.__del__()" for more information).

   New in version 3.2: *maxtasksperchild*

   New in version 3.4: *context*

   Note:

     Worker processes within a "Pool" typically live for the complete
     duration of the Pool's work queue. A frequent pattern found in
     other systems (such as Apache, mod_wsgi, etc) to free resources
     held by workers is to allow a worker within a pool to complete
     only a set amount of work before exiting, being cleaned up and a
     new process being spawned to replace the old one. The
     *maxtasksperchild* argument to the "Pool" exposes this ability to
     the end user.

   apply(func[, args[, kwds]])

      Call *func* with arguments *args* and keyword arguments *kwds*.
      It blocks until the result is ready. Given this blocks,
      "apply_async()" is better suited for performing work in
      parallel. Additionally, *func* is only executed in one of the
      workers of the pool.

   apply_async(func[, args[, kwds[, callback[, error_callback]]]])

      A variant of the "apply()" method which returns a "AsyncResult"
      object.

      If *callback* is specified then it should be a callable which
      accepts a single argument.  When the result becomes ready
      *callback* is applied to it, that is unless the call failed, in
      which case the *error_callback* is applied instead.

      If *error_callback* is specified then it should be a callable
      which accepts a single argument.  If the target function fails,
      then the *error_callback* is called with the exception instance.

      Callbacks should complete immediately since otherwise the thread
      which handles the results will get blocked.
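
      A minimal sketch of using the callbacks (the function names here
      are illustrative):

         from multiprocessing import Pool

         def square(x):
             return x * x

         def on_result(value):      # runs in the parent's result thread
             print('result:', value)

         def on_error(exc):
             print('failed:', exc)

         if __name__ == '__main__':
             with Pool(2) as pool:
                 r = pool.apply_async(square, (3,), callback=on_result,
                                      error_callback=on_error)
                 r.wait()           # the callback prints "result: 9"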

   map(func, iterable[, chunksize])

      A parallel equivalent of the "map()" built-in function (it
      supports only one *iterable* argument though, for multiple
      iterables see "starmap()"). It blocks until the result is ready.

      This method chops the iterable into a number of chunks which it
      submits to the process pool as separate tasks.  The
      (approximate) size of these chunks can be specified by setting
      *chunksize* to a positive integer.

      Note that it may cause high memory usage for very long
      iterables. Consider using "imap()" or "imap_unordered()" with
      explicit *chunksize* option for better efficiency.

   map_async(func, iterable[, chunksize[, callback[, error_callback]]])

      A variant of the "map()" method which returns a "AsyncResult"
      object.

      If *callback* is specified then it should be a callable which
      accepts a single argument.  When the result becomes ready
      *callback* is applied to it, that is unless the call failed, in
      which case the *error_callback* is applied instead.

      If *error_callback* is specified then it should be a callable
      which accepts a single argument.  If the target function fails,
      then the *error_callback* is called with the exception instance.

      Callbacks should complete immediately since otherwise the thread
      which handles the results will get blocked.

   imap(func, iterable[, chunksize])

      A lazier version of "map()".

      The *chunksize* argument is the same as the one used by the
      "map()" method.  For very long iterables using a large value for
      *chunksize* can make the job complete **much** faster than using
      the default value of "1".

      Also if *chunksize* is "1" then the "next()" method of the
      iterator returned by the "imap()" method has an optional
      *timeout* parameter: "next(timeout)" will raise
      "multiprocessing.TimeoutError" if the result cannot be returned
      within *timeout* seconds.

   imap_unordered(func, iterable[, chunksize])

      The same as "imap()" except that the ordering of the results
      from the returned iterator should be considered arbitrary.
      (Only when there is only one worker process is the order
      guaranteed to be "correct".)

   starmap(func, iterable[, chunksize])

      Like "map()" except that the elements of the *iterable* are
      expected to be iterables that are unpacked as arguments.

      Hence an *iterable* of "[(1,2), (3, 4)]" results in "[func(1,2),
      func(3,4)]".

      New in version 3.3.
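
      For example, a minimal sketch:

         from multiprocessing import Pool

         def add(x, y):
             return x + y

         if __name__ == '__main__':
             with Pool(2) as pool:
                 print(pool.starmap(add, [(1, 2), (3, 4)]))   # [3, 7]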

   starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])

      A combination of "starmap()" and "map_async()" that iterates
      over *iterable* of iterables and calls *func* with the iterables
      unpacked. Returns a result object.

      New in version 3.3.

   close()

      Prevents any more tasks from being submitted to the pool.  Once
      all the tasks have been completed the worker processes will
      exit.

   terminate()

      Stops the worker processes immediately without completing
      outstanding work.  When the pool object is garbage collected
      "terminate()" will be called immediately.

   join()

      Wait for the worker processes to exit.  One must call "close()"
      or "terminate()" before using "join()".

   New in version 3.3: Pool objects now support the context management
   protocol -- see Context Manager Types.  "__enter__()"
   returns the pool object, and "__exit__()" calls "terminate()".

class multiprocessing.pool.AsyncResult

   The class of the result returned by "Pool.apply_async()" and
   "Pool.map_async()".

   get([timeout])

      Return the result when it arrives.  If *timeout* is not "None"
      and the result does not arrive within *timeout* seconds then
      "multiprocessing.TimeoutError" is raised.  If the remote call
      raised an exception then that exception will be reraised by
      "get()".

   wait([timeout])

      Wait until the result is available or until *timeout* seconds
      pass.

   ready()

      Return whether the call has completed.

   successful()

      Return whether the call completed without raising an exception.
      Will raise "ValueError" if the result is not ready.

      Changed in version 3.7: If the result is not ready, "ValueError"
      is raised instead of "AssertionError".

The following example demonstrates the use of a pool:

   from multiprocessing import Pool
   import time

   def f(x):
       return x*x

   if __name__ == '__main__':
       with Pool(processes=4) as pool:         # start 4 worker processes
           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow

           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"

           it = pool.imap(f, range(10))
           print(next(it))                     # prints "0"
           print(next(it))                     # prints "1"
           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow

           result = pool.apply_async(time.sleep, (10,))
           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError


Listeners and Clients
---------------------

Usually message passing between processes is done using queues or by
using "Connection" objects returned by "Pipe()".

However, the "multiprocessing.connection" module allows some extra
flexibility.  It basically gives a high level message oriented API for
dealing with sockets or Windows named pipes.  It also has support for
*digest authentication* using the "hmac" module, and for polling
multiple connections at the same time.

multiprocessing.connection.deliver_challenge(connection, authkey)

   Send a randomly generated message to the other end of the
   connection and wait for a reply.

   If the reply matches the digest of the message using *authkey* as
   the key then a welcome message is sent to the other end of the
   connection.  Otherwise "AuthenticationError" is raised.

multiprocessing.connection.answer_challenge(connection, authkey)

   Receive a message, calculate the digest of the message using
   *authkey* as the key, and then send the digest back.

   If a welcome message is not received, then "AuthenticationError" is
   raised.

multiprocessing.connection.Client(address[, family[, authkey]])

   Attempt to set up a connection to the listener which is using
   address *address*, returning a "Connection".

   The type of the connection is determined by *family* argument, but
   this can generally be omitted since it can usually be inferred from
   the format of *address*. (See Address Formats)

   If *authkey* is given and not None, it should be a byte string and
   will be used as the secret key for an HMAC-based authentication
   challenge. No authentication is done if *authkey* is None.
   "AuthenticationError" is raised if authentication fails. See
   Authentication keys.

class multiprocessing.connection.Listener([address[, family[, backlog[, authkey]]]])

   A wrapper for a bound socket or Windows named pipe which is
   'listening' for connections.

   *address* is the address to be used by the bound socket or named
   pipe of the listener object.

   Note:

     If an address of '0.0.0.0' is used, the address will not be a
     connectable end point on Windows. If you require a connectable
     end-point, you should use '127.0.0.1'.

   *family* is the type of socket (or named pipe) to use.  This can be
   one of the strings "'AF_INET'" (for a TCP socket), "'AF_UNIX'" (for
   a Unix domain socket) or "'AF_PIPE'" (for a Windows named pipe).
   Of these only the first is guaranteed to be available.  If *family*
   is "None" then the family is inferred from the format of *address*.
   If *address* is also "None" then a default is chosen.  This default
   is the family which is assumed to be the fastest available.  See
   Address Formats.  Note that if *family* is "'AF_UNIX'" and
   address is "None" then the socket will be created in a private
   temporary directory created using "tempfile.mkstemp()".

   If the listener object uses a socket then *backlog* (1 by default)
   is passed to the "listen()" method of the socket once it has been
   bound.

   If *authkey* is given and not None, it should be a byte string and
   will be used as the secret key for an HMAC-based authentication
   challenge. No authentication is done if *authkey* is None.
   "AuthenticationError" is raised if authentication fails. See
   Authentication keys.

   accept()

      Accept a connection on the bound socket or named pipe of the
      listener object and return a "Connection" object. If
      authentication is attempted and fails, then
      "AuthenticationError" is raised.

   close()

      Close the bound socket or named pipe of the listener object.
      This is called automatically when the listener is garbage
      collected.  However it is advisable to call it explicitly.

   Listener objects have the following read-only properties:

   address

      The address which is being used by the Listener object.

   last_accepted

      The address from which the last accepted connection came.  If
      this is unavailable then it is "None".

   New in version 3.3: Listener objects now support the context
   management protocol -- see Context Manager Types.
   "__enter__()" returns the listener object, and "__exit__()" calls
   "close()".

multiprocessing.connection.wait(object_list, timeout=None)

   Wait till an object in *object_list* is ready.  Returns the list of
   those objects in *object_list* which are ready.  If *timeout* is a
   float then the call blocks for at most that many seconds.  If
   *timeout* is "None" then it will block for an unlimited period. A
   negative timeout is equivalent to a zero timeout.

   For both Unix and Windows, an object can appear in *object_list* if
   it is

   * a readable "Connection" object;

   * a connected and readable "socket.socket" object; or

   * the "sentinel" attribute of a "Process" object.

   A connection or socket object is ready when there is data available
   to be read from it, or the other end has been closed.

   **Unix**: "wait(object_list, timeout)" almost equivalent
   "select.select(object_list, [], [], timeout)".  The difference is
   that, if "select.select()" is interrupted by a signal, it can raise
   "OSError" with an error number of "EINTR", whereas "wait()" will
   not.

   **Windows**: An item in *object_list* must either be an integer
   handle which is waitable (according to the definition used by the
   documentation of the Win32 function "WaitForMultipleObjects()") or
   it can be an object with a "fileno()" method which returns a socket
   handle or pipe handle.  (Note that pipe handles and socket handles
   are **not** waitable handles.)

   New in version 3.3.

**Examples**

The following server code creates a listener which uses "'secret
password'" as an authentication key.  It then waits for a connection
and sends some data to the client:

   from multiprocessing.connection import Listener
   from array import array

   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'

   with Listener(address, authkey=b'secret password') as listener:
       with listener.accept() as conn:
           print('connection accepted from', listener.last_accepted)

           conn.send([2.25, None, 'junk', float])

           conn.send_bytes(b'hello')

           conn.send_bytes(array('i', [42, 1729]))

The following code connects to the server and receives some data from
the server:

   from multiprocessing.connection import Client
   from array import array

   address = ('localhost', 6000)

   with Client(address, authkey=b'secret password') as conn:
       print(conn.recv())                  # => [2.25, None, 'junk', float]

       print(conn.recv_bytes())            # => b'hello'

       arr = array('i', [0, 0, 0, 0, 0])
       print(conn.recv_bytes_into(arr))    # => 8
       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])

The following code uses "wait()" to wait for messages from multiple
processes at once:

   import time, random
   from multiprocessing import Process, Pipe, current_process
   from multiprocessing.connection import wait

   def foo(w):
       for i in range(10):
           w.send((i, current_process().name))
       w.close()

   if __name__ == '__main__':
       readers = []

       for i in range(4):
           r, w = Pipe(duplex=False)
           readers.append(r)
           p = Process(target=foo, args=(w,))
           p.start()
           # We close the writable end of the pipe now to be sure that
           # p is the only process which owns a handle for it.  This
           # ensures that when p closes its handle for the writable end,
           # wait() will promptly report the readable end as being ready.
           w.close()

       while readers:
           for r in wait(readers):
               try:
                   msg = r.recv()
               except EOFError:
                   readers.remove(r)
               else:
                   print(msg)


Address Formats
~~~~~~~~~~~~~~~

* Um endereço "'AF_INET'" é uma tupla na forma de "(hostname, port)"
  sendo *hostname* uma string e *port* um inteiro.

* An "'AF_UNIX'" address is a string representing a filename on the
  filesystem.

* An "'AF_PIPE'" address is a string of the form
  "r'\.\pipe{PipeName}'".  To use "Client()" to connect to a named
  pipe on a remote computer called *ServerName* one should use an
  address of the form "r'\*ServerName*\pipe{PipeName}'" instead.

Note that any string beginning with two backslashes is assumed by
default to be an "'AF_PIPE'" address rather than an "'AF_UNIX'"
address.


Authentication keys
-------------------

When one uses "Connection.recv", the data received is automatically
unpickled. Unfortunately unpickling data from an untrusted source is a
security risk. Therefore "Listener" and "Client()" use the "hmac"
module to provide digest authentication.

An authentication key is a byte string which can be thought of as a
password: once a connection is established both ends will demand proof
that the other knows the authentication key.  (Demonstrating that both
ends are using the same key does **not** involve sending the key over
the connection.)

If authentication is requested but no authentication key is specified
then the return value of "current_process().authkey" is used (see
"Process").  This value will be automatically inherited by any
"Process" object that the current process creates. This means that (by
default) all processes of a multi-process program will share a single
authentication key which can be used when setting up connections
between themselves.

Suitable authentication keys can also be generated by using
"os.urandom()".


Logging
-------

Some support for logging is available.  Note, however, that the
"logging" package does not use process shared locks so it is possible
(depending on the handler type) for messages from different processes
to get mixed up.

multiprocessing.get_logger()

   Returns the logger used by "multiprocessing".  If necessary, a new
   one will be created.

   When first created the logger has level "logging.NOTSET" and no
   default handler. Messages sent to this logger will not by default
   propagate to the root logger.

   Note that on Windows child processes will only inherit the level of
   the parent process's logger -- any other customization of the
   logger will not be inherited.

multiprocessing.log_to_stderr()

   This function performs a call to "get_logger()" but in addition to
   returning the logger created by get_logger, it adds a handler which
   sends output to "sys.stderr" using format
   "'[%(levelname)s/%(processName)s] %(message)s'".

Below is an example session with logging turned on:

   >>> import multiprocessing, logging
   >>> logger = multiprocessing.log_to_stderr()
   >>> logger.setLevel(logging.INFO)
   >>> logger.warning('doomed')
   [WARNING/MainProcess] doomed
   >>> m = multiprocessing.Manager()
   [INFO/SyncManager-...] child process calling self.run()
   [INFO/SyncManager-...] created temp directory /.../pymp-...
   [INFO/SyncManager-...] manager serving at '/.../listener-...'
   >>> del m
   [INFO/MainProcess] sending shutdown message to manager
   [INFO/SyncManager-...] manager exiting with exitcode 0

For a full table of logging levels, see the "logging" module.


The "multiprocessing.dummy" module
----------------------------------

"multiprocessing.dummy" replicates the API of "multiprocessing" but is
no more than a wrapper around the "threading" module.

In particular, the "Pool" function provided by "multiprocessing.dummy"
returns an instance of "ThreadPool", which is a subclass of "Pool"
that supports all the same method calls but uses a pool of worker
threads rather than worker processes.

class multiprocessing.pool.ThreadPool([processes[, initializer[, initargs]]])

   A thread pool object which controls a pool of worker threads to
   which jobs can be submitted.  "ThreadPool" instances are fully
   interface compatible with "Pool" instances, and their resources
   must also be properly managed, either by using the pool as a
   context manager or by calling "close()" and "terminate()" manually.

   *processes* is the number of worker threads to use.  If *processes*
   is "None" then the number returned by "os.cpu_count()" is used.

   If *initializer* is not "None" then each worker process will call
   "initializer(*initargs)" when it starts.

   Unlike "Pool", *maxtasksperchild* and *context* cannot be provided.

      Note:

        A "ThreadPool" shares the same interface as "Pool", which is
        designed around a pool of processes and predates the
        introduction of the "concurrent.futures" module.  As such, it
        inherits some operations that don't make sense for a pool
        backed by threads, and it has its own type for representing
        the status of asynchronous jobs, "AsyncResult", that is not
        understood by any other libraries.  Users should generally
        prefer to use "concurrent.futures.ThreadPoolExecutor", which
        has a simpler interface that was designed around threads from
        the start, and which returns "concurrent.futures.Future"
        instances that are compatible with many other libraries,
        including "asyncio".


Programming guidelines
======================

There are certain guidelines and idioms which should be adhered to
when using "multiprocessing".


All start methods
-----------------

The following applies to all start methods.

Avoid shared state

   As far as possible one should try to avoid shifting large amounts
   of data between processes.

   It is probably best to stick to using queues or pipes for
   communication between processes rather than using the lower level
   synchronization primitives.

Picklability

   Ensure that the arguments to the methods of proxies are picklable.

Thread safety of proxies

   Do not use a proxy object from more than one thread unless you
   protect it with a lock.

   (There is never a problem with different processes using the *same*
   proxy.)

Joining zombie processes

   On Unix when a process finishes but has not been joined it becomes
   a zombie. There should never be very many because each time a new
   process starts (or "active_children()" is called) all completed
   processes which have not yet been joined will be joined.  Also
   calling a finished process's "Process.is_alive" will join the
   process.  Even so it is probably good practice to explicitly join
   all the processes that you start.

Better to inherit than pickle/unpickle

   When using the *spawn* or *forkserver* start methods many types
   from "multiprocessing" need to be picklable so that child processes
   can use them.  However, one should generally avoid sending shared
   objects to other processes using pipes or queues. Instead you
   should arrange the program so that a process which needs access to
   a shared resource created elsewhere can inherit it from an ancestor
   process.

Avoid terminating processes

   Using the "Process.terminate" method to stop a process is liable to
   cause any shared resources (such as locks, semaphores, pipes and
   queues) currently being used by the process to become broken or
   unavailable to other processes.

   Therefore it is probably best to only consider using
   "Process.terminate" on processes which never use any shared
   resources.

Joining processes that use queues

   Bear in mind that a process that has put items in a queue will wait
   before terminating until all the buffered items are fed by the
   "feeder" thread to the underlying pipe.  (The child process can
   call the "Queue.cancel_join_thread" method of the queue to avoid
   this behaviour.)

   This means that whenever you use a queue you need to make sure that
   all items which have been put on the queue will eventually be
   removed before the process is joined.  Otherwise you cannot be sure
   that processes which have put items on the queue will terminate.
   Remember also that non-daemonic processes will be joined
   automatically.

   An example which will deadlock is the following:

      from multiprocessing import Process, Queue

      def f(q):
          q.put('X' * 1000000)

      if __name__ == '__main__':
          queue = Queue()
          p = Process(target=f, args=(queue,))
          p.start()
          p.join()                    # this deadlocks
          obj = queue.get()

   A fix here would be to swap the last two lines (or simply remove
   the "p.join()" line).

Explicitly pass resources to child processes

   On Unix using the *fork* start method, a child process can make use
   of a shared resource created in a parent process using a global
   resource.  However, it is better to pass the object as an argument
   to the constructor for the child process.

   Apart from making the code (potentially) compatible with Windows
   and the other start methods this also ensures that as long as the
   child process is still alive the object will not be garbage
   collected in the parent process.  This might be important if some
   resource is freed when the object is garbage collected in the
   parent process.

   So for instance

      from multiprocessing import Process, Lock

      def f():
          ... do something using "lock" ...

      if __name__ == '__main__':
          lock = Lock()
          for i in range(10):
              Process(target=f).start()

   should be rewritten as

      from multiprocessing import Process, Lock

      def f(l):
          ... do something using "l" ...

      if __name__ == '__main__':
          lock = Lock()
          for i in range(10):
              Process(target=f, args=(lock,)).start()

Beware of replacing "sys.stdin" with a "file like object"

   "multiprocessing" originally unconditionally called:

      os.close(sys.stdin.fileno())

   in the "multiprocessing.Process._bootstrap()" method --- this
   resulted in issues with processes-in-processes. This has been
   changed to:

      sys.stdin.close()
      sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

   Which solves the fundamental issue of processes colliding with each
   other resulting in a bad file descriptor error, but introduces a
   potential danger to applications which replace "sys.stdin()" with a
   "file-like object" with output buffering.  This danger is that if
   multiple processes call "close()" on this file-like object, it
   could result in the same data being flushed to the object multiple
   times, resulting in corruption.

   If you write a file-like object and implement your own caching, you
   can make it fork-safe by storing the pid whenever you append to the
   cache, and discarding the cache when the pid changes. For example:

      @property
      def cache(self):
          pid = os.getpid()
          if pid != self._pid:
              self._pid = pid
              self._cache = []
          return self._cache

   For more information, see bpo-5155, bpo-5313 and bpo-5331.


The *spawn* and *forkserver* start methods
------------------------------------------

There are a few extra restrictions which don't apply to the *fork*
start method.

More picklability

   Ensure that all arguments to "Process.__init__()" are picklable.
   Also, if you subclass "Process" then make sure that instances will
   be picklable when the "Process.start" method is called.

Global variables

   Bear in mind that if code run in a child process tries to access a
   global variable, then the value it sees (if any) may not be the
   same as the value in the parent process at the time that
   "Process.start" was called.

   However, global variables which are just module level constants
   cause no problems.
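
   For example, under the *spawn* start method the child re-imports
   the module, so it sees the module-level value rather than the
   parent's later assignment (a minimal sketch):

      from multiprocessing import Process, set_start_method

      value = 0

      def show():
          print(value)                # prints 0 under 'spawn'

      if __name__ == '__main__':
          set_start_method('spawn')
          value = 1                   # not seen by the spawned child
          p = Process(target=show)
          p.start()
          p.join()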

Safe importing of main module

   Make sure that the main module can be safely imported by a new
   Python interpreter without causing unintended side effects (such
   as starting a new process).

   For example, using the *spawn* or *forkserver* start method running
   the following module would fail with a "RuntimeError":

      from multiprocessing import Process

      def foo():
          print('hello')

      p = Process(target=foo)
      p.start()

   Instead one should protect the "entry point" of the program by
   using "if __name__ == '__main__':" as follows:

      from multiprocessing import Process, freeze_support, set_start_method

      def foo():
          print('hello')

      if __name__ == '__main__':
          freeze_support()
          set_start_method('spawn')
          p = Process(target=foo)
          p.start()

   (The "freeze_support()" line can be omitted if the program will be
   run normally instead of frozen.)

   This allows the newly spawned Python interpreter to safely import
   the module and then run the module's "foo()" function.

   Similar restrictions apply if a pool or manager is created in the
   main module.


Examples
========

Demonstration of how to create and use customized managers and
proxies:

   from multiprocessing import freeze_support
   from multiprocessing.managers import BaseManager, BaseProxy
   import operator

   ##

   class Foo:
       def f(self):
           print('you called Foo.f()')
       def g(self):
           print('you called Foo.g()')
       def _h(self):
           print('you called Foo._h()')

   # A simple generator function
   def baz():
       for i in range(10):
           yield i*i

   # Proxy type for generator objects
   class GeneratorProxy(BaseProxy):
       _exposed_ = ['__next__']
       def __iter__(self):
           return self
       def __next__(self):
           return self._callmethod('__next__')

   # Function to return the operator module
   def get_operator_module():
       return operator

   ##

   class MyManager(BaseManager):
       pass

   # register the Foo class; make `f()` and `g()` accessible via proxy
   MyManager.register('Foo1', Foo)

   # register the Foo class; make `g()` and `_h()` accessible via proxy
   MyManager.register('Foo2', Foo, exposed=('g', '_h'))

   # register the generator function baz; use `GeneratorProxy` to make proxies
   MyManager.register('baz', baz, proxytype=GeneratorProxy)

   # register get_operator_module(); make public functions accessible via proxy
   MyManager.register('operator', get_operator_module)

   ##

   def test():
       manager = MyManager()
       manager.start()

       print('-' * 20)

       f1 = manager.Foo1()
       f1.f()
       f1.g()
       assert not hasattr(f1, '_h')
       assert sorted(f1._exposed_) == sorted(['f', 'g'])

       print('-' * 20)

       f2 = manager.Foo2()
       f2.g()
       f2._h()
       assert not hasattr(f2, 'f')
       assert sorted(f2._exposed_) == sorted(['g', '_h'])

       print('-' * 20)

       it = manager.baz()
       for i in it:
           print('<%d>' % i, end=' ')
       print()

       print('-' * 20)

       op = manager.operator()
       print('op.add(23, 45) =', op.add(23, 45))
       print('op.pow(2, 94) =', op.pow(2, 94))
       print('op._exposed_ =', op._exposed_)

   ##

   if __name__ == '__main__':
       freeze_support()
       test()

Using "Pool":

   import multiprocessing
   import time
   import random
   import sys

   #
   # Functions used by test code
   #

   def calculate(func, args):
       result = func(*args)
       return '%s says that %s%s = %s' % (
           multiprocessing.current_process().name,
           func.__name__, args, result
           )

   def calculatestar(args):
       return calculate(*args)

   def mul(a, b):
       time.sleep(0.5 * random.random())
       return a * b

   def plus(a, b):
       time.sleep(0.5 * random.random())
       return a + b

   def f(x):
       return 1.0 / (x - 5.0)

   def pow3(x):
       return x ** 3

   def noop(x):
       pass

   #
   # Test code
   #

   def test():
       PROCESSES = 4
       print('Creating pool with %d processes\n' % PROCESSES)

       with multiprocessing.Pool(PROCESSES) as pool:
           #
           # Tests
           #

           TASKS = [(mul, (i, 7)) for i in range(10)] + \
                   [(plus, (i, 8)) for i in range(10)]

           results = [pool.apply_async(calculate, t) for t in TASKS]
           imap_it = pool.imap(calculatestar, TASKS)
           imap_unordered_it = pool.imap_unordered(calculatestar, TASKS)

           print('Ordered results using pool.apply_async():')
           for r in results:
               print('\t', r.get())
           print()

           print('Ordered results using pool.imap():')
           for x in imap_it:
               print('\t', x)
           print()

           print('Unordered results using pool.imap_unordered():')
           for x in imap_unordered_it:
               print('\t', x)
           print()

           print('Ordered results using pool.map() --- will block till complete:')
           for x in pool.map(calculatestar, TASKS):
               print('\t', x)
           print()

           #
           # Test error handling
           #

           print('Testing error handling:')

           try:
               print(pool.apply(f, (5,)))
           except ZeroDivisionError:
               print('\tGot ZeroDivisionError as expected from pool.apply()')
           else:
               raise AssertionError('expected ZeroDivisionError')

           try:
               print(pool.map(f, list(range(10))))
           except ZeroDivisionError:
               print('\tGot ZeroDivisionError as expected from pool.map()')
           else:
               raise AssertionError('expected ZeroDivisionError')

           try:
               print(list(pool.imap(f, list(range(10)))))
           except ZeroDivisionError:
               print('\tGot ZeroDivisionError as expected from list(pool.imap())')
           else:
               raise AssertionError('expected ZeroDivisionError')

           it = pool.imap(f, list(range(10)))
           for i in range(10):
               try:
                   x = next(it)
               except ZeroDivisionError:
                   if i == 5:
                       pass
               except StopIteration:
                   break
               else:
                   if i == 5:
                       raise AssertionError('expected ZeroDivisionError')

           assert i == 9
           print('\tGot ZeroDivisionError as expected from IMapIterator.next()')
           print()

           #
           # Testing timeouts
           #

           print('Testing ApplyResult.get() with timeout:', end=' ')
           res = pool.apply_async(calculate, TASKS[0])
           while 1:
               sys.stdout.flush()
               try:
                   sys.stdout.write('\n\t%s' % res.get(0.02))
                   break
               except multiprocessing.TimeoutError:
                   sys.stdout.write('.')
           print()
           print()

           print('Testing IMapIterator.next() with timeout:', end=' ')
           it = pool.imap(calculatestar, TASKS)
           while 1:
               sys.stdout.flush()
               try:
                   sys.stdout.write('\n\t%s' % it.next(0.02))
               except StopIteration:
                   break
               except multiprocessing.TimeoutError:
                   sys.stdout.write('.')
           print()
           print()


   if __name__ == '__main__':
       multiprocessing.freeze_support()
       test()

An example showing how to use queues to feed tasks to a collection of
worker processes and collect the results:

   import time
   import random

   from multiprocessing import Process, Queue, current_process, freeze_support

   #
   # Function run by worker processes
   #

   def worker(input, output):
       for func, args in iter(input.get, 'STOP'):
           result = calculate(func, args)
           output.put(result)

   #
   # Function used to calculate result
   #

   def calculate(func, args):
       result = func(*args)
       return '%s says that %s%s = %s' % \
           (current_process().name, func.__name__, args, result)

   #
   # Functions referenced by tasks
   #

   def mul(a, b):
       time.sleep(0.5*random.random())
       return a * b

   def plus(a, b):
       time.sleep(0.5*random.random())
       return a + b

   #
   #
   #

   def test():
       NUMBER_OF_PROCESSES = 4
       TASKS1 = [(mul, (i, 7)) for i in range(20)]
       TASKS2 = [(plus, (i, 8)) for i in range(10)]

       # Create queues
       task_queue = Queue()
       done_queue = Queue()

       # Submit tasks
       for task in TASKS1:
           task_queue.put(task)

       # Start worker processes
       for i in range(NUMBER_OF_PROCESSES):
           Process(target=worker, args=(task_queue, done_queue)).start()

       # Get and print results
       print('Unordered results:')
       for i in range(len(TASKS1)):
           print('\t', done_queue.get())

       # Add more tasks using `put()`
       for task in TASKS2:
           task_queue.put(task)

       # Get and print some more results
       for i in range(len(TASKS2)):
           print('\t', done_queue.get())

       # Tell child processes to stop
       for i in range(NUMBER_OF_PROCESSES):
           task_queue.put('STOP')


   if __name__ == '__main__':
       freeze_support()
       test()
