Introduction
************

The Application Programmer's Interface to Python gives C and C++
programmers access to the Python interpreter at a variety of levels.
The API is equally usable from C++, but for brevity it is generally
referred to as the Python/C API.  There are two fundamentally
different reasons for using the Python/C API.  The first reason is to
write *extension modules* for specific purposes; these are C modules
that extend the Python interpreter.  This is probably the most common
use.  The second reason is to use Python as a component in a larger
application; this technique is generally referred to as *embedding*
Python in an application.

Writing an extension module is a relatively well-understood process,
where a "cookbook" approach works well.  There are several tools that
automate the process to some extent.  While people have embedded
Python in other applications since its early existence, the process of
embedding Python is less straightforward than writing an extension.

Many API functions are useful independent of whether you're embedding
or extending Python; moreover, most applications that embed Python
will need to provide a custom extension as well, so it's probably a
good idea to become familiar with writing an extension before
attempting to embed Python in a real application.


Coding standards
================

If you're writing C code for inclusion in CPython you **must** follow
the guidelines and standards defined in **PEP 7**.  These guidelines
apply regardless of the version of Python you are contributing to.
Following these conventions is not necessary for your own third party
extension modules, unless you eventually expect to contribute them to
Python.


Include Files
=============

All function, type and macro definitions needed to use the Python/C
API are included in your code by the following line:

   #define PY_SSIZE_T_CLEAN
   #include <Python.h>

This implies inclusion of the following standard headers:
"<stdio.h>", "<string.h>", "<errno.h>", "<limits.h>", "<assert.h>" and
"<stdlib.h>" (if available).

Note:

  Since Python may define some pre-processor definitions which affect
  the standard headers on some systems, you *must* include "Python.h"
  before any standard headers are included.  It is recommended to
  always define "PY_SSIZE_T_CLEAN" before including "Python.h".  See
  Parsing arguments and building values for a description of this
  macro.

All user visible names defined by Python.h (except those defined by
the included standard headers) have one of the prefixes "Py" or
"_Py".  Names beginning with "_Py" are for internal use by the Python
implementation and should not be used by extension writers.  Structure
member names do not have a reserved prefix.

Note:

  User code should never define names that begin with "Py" or "_Py".
  This confuses the reader, and jeopardizes the portability of the
  user code to future Python versions, which may define additional
  names beginning with one of these prefixes.

The header files are typically installed with Python.  On Unix, these
are located in the directories "*prefix*/include/pythonversion/" and
"*exec_prefix*/include/pythonversion/", where "prefix" and
"exec_prefix" are defined by the corresponding parameters to Python's
**configure** script and *version* is "'%d.%d' %
sys.version_info[:2]".  On Windows, the headers are installed in
"*prefix*/include", where "prefix" is the installation directory
specified to the installer.

To include the headers, place both directories (if different) on your
compiler's search path for includes.  Do *not* place the parent
directories on the search path and then use "#include
<pythonX.Y/Python.h>"; this will break on multi-platform builds since
the platform independent headers under "prefix" include the platform
specific headers from "exec_prefix".

C++ users should note that although the API is defined entirely using
C, the header files properly declare the entry points to be "extern
"C"". As a result, there is no need to do anything special to use the
API from C++.


Useful macros
=============

Several useful macros are defined in the Python header files.  Many
are defined closer to where they are useful (e.g. "Py_RETURN_NONE").
Others of a more general utility are defined here.  This is not
necessarily a complete listing.

PyMODINIT_FUNC

   Declare an extension module "PyInit" initialization function. The
   function return type is PyObject*. The macro declares any special
   linkage declarations required by the platform, and for C++ declares
   the function as "extern "C"".

   The initialization function must be named "PyInit_*name*", where
   *name* is the name of the module, and should be the only
   non-"static" item defined in the module file. Example:

      static struct PyModuleDef spam_module = {
          PyModuleDef_HEAD_INIT,
          .m_name = "spam",
          ...
      };

      PyMODINIT_FUNC
      PyInit_spam(void)
      {
          return PyModule_Create(&spam_module);
      }

Py_ABS(x)

   Return the absolute value of "x".

   New in version 3.3.

Py_ALWAYS_INLINE

   Ask the compiler to always inline a static inline function. The
   compiler can ignore it and decide to not inline the function.

   It can be used to inline performance critical static inline
   functions when building Python in debug mode with function inlining
   disabled. For example, MSC disables function inlining when building
   in debug mode.

   Blindly marking a static inline function with Py_ALWAYS_INLINE can
   result in worse performance (due to increased code size, for
   example). The compiler is usually smarter than the developer for
   the cost/benefit analysis.

   If Python is built in debug mode (if the "Py_DEBUG" macro is
   defined), the "Py_ALWAYS_INLINE" macro does nothing.

   It must be specified before the function return type. Usage:

      static inline Py_ALWAYS_INLINE int random(void) { return 4; }

   New in version 3.11.

Py_CHARMASK(c)

   Argument must be a character or an integer in the range [-128, 127]
   or [0, 255].  This macro returns "c" cast to an "unsigned char".

Py_DEPRECATED(version)

   Use this for deprecated declarations.  The macro must be placed
   before the symbol name.

   Example:

      Py_DEPRECATED(3.8) PyAPI_FUNC(int) Py_OldFunction(void);

   Changed in version 3.8: MSVC support was added.

Py_GETENV(s)

   Like "getenv(s)", but returns "NULL" if "-E" was passed on the
   command line (i.e. if "Py_IgnoreEnvironmentFlag" is set).

Py_MAX(x, y)

   Return the maximum value between "x" and "y".

   New in version 3.3.

Py_MEMBER_SIZE(type, member)

   Return the size of a structure ("type") "member" in bytes.

   New in version 3.6.

Py_MIN(x, y)

   Return the minimum value between "x" and "y".

   New in version 3.3.

Py_NO_INLINE

   Disable inlining on a function. For example, it reduces the C stack
   consumption: useful on LTO+PGO builds which heavily inline code
   (see bpo-33720).

   Usage:

      Py_NO_INLINE static int random(void) { return 4; }

   New in version 3.11.

Py_STRINGIFY(x)

   Convert "x" to a C string.  E.g. "Py_STRINGIFY(123)" returns
   ""123"".

   New in version 3.4.

Py_UNREACHABLE()

   Use this when you have a code path that cannot be reached by
   design. For example, in the "default:" clause in a "switch"
   statement for which all possible values are covered in "case"
   statements.  Use this in places where you might be tempted to put
   an "assert(0)" or "abort()" call.

   In release mode, the macro helps the compiler to optimize the code,
   and avoids a warning about unreachable code.  For example, the
   macro is implemented with "__builtin_unreachable()" on GCC in
   release mode.

   A use for "Py_UNREACHABLE()" is following a call to a function that
   never returns but that is not declared "_Py_NO_RETURN".

   If a code path is very unlikely but can be reached under
   exceptional circumstances, this macro must not be used.  For
   example, under low memory conditions or if a system call returns a
   value out of the expected range.  In this case, it's better to
   report the error to the caller.  If the error cannot be reported to
   the caller, "Py_FatalError()" can be used.

   New in version 3.7.

Py_UNUSED(arg)

   Use this for unused arguments in a function definition to silence
   compiler warnings. Example: "int func(int a, int Py_UNUSED(b)) {
   return a; }".

   New in version 3.4.

PyDoc_STRVAR(name, str)

   Creates a variable with name "name" that can be used in docstrings.
   If Python is built without docstrings, the value will be empty.

   Use "PyDoc_STRVAR" for docstrings to support building Python
   without docstrings, as specified in **PEP 7**.

   Example:

      PyDoc_STRVAR(pop_doc, "Remove and return the rightmost element.");

      static PyMethodDef deque_methods[] = {
          // ...
          {"pop", (PyCFunction)deque_pop, METH_NOARGS, pop_doc},
          // ...
      };

PyDoc_STR(str)

   Creates a docstring for the given input string or an empty string
   if docstrings are disabled.

   Use "PyDoc_STR" in specifying docstrings to support building Python
   without docstrings, as specified in **PEP 7**.

   Example:

      static PyMethodDef pysqlite_row_methods[] = {
          {"keys", (PyCFunction)pysqlite_row_keys, METH_NOARGS,
              PyDoc_STR("Returns the keys of the row.")},
          {NULL, NULL}
      };


Objects, Types and Reference Counts
===================================

Most Python/C API functions have one or more arguments as well as a
return value of type PyObject*.  This type is a pointer to an opaque
data type representing an arbitrary Python object.  Since all Python
object types are treated the same way by the Python language in most
situations (e.g., assignments, scope rules, and argument passing), it
is only fitting that they should be represented by a single C type.
Almost all Python objects live on the heap: you never declare an
automatic or static variable of type "PyObject", only pointer
variables of type PyObject* can  be declared.  The sole exception are
the type objects; since these must never be deallocated, they are
typically static "PyTypeObject" objects.

All Python objects (even Python integers) have a *type* and a
*reference count*.  An object's type determines what kind of object it
is (e.g., an integer, a list, or a user-defined function; there are
many more as explained in The standard type hierarchy).  For each of
the well-known types there is a macro to check whether an object is of
that type; for instance, "PyList_Check(a)" is true if (and only if)
the object pointed to by *a* is a Python list.


Reference Counts
----------------

The reference count is important because today's computers have a
finite (and often severely limited) memory size; it counts how many
different places there are that have a *strong reference* to an
object. Such a place could be another object, or a global (or static)
C variable, or a local variable in some C function. When the last
*strong reference* to an object is released (i.e. its reference count
becomes zero), the object is deallocated. If it contains references to
other objects, those references are released. Those other objects may
be deallocated in turn, if there are no more references to them, and
so on.  (There's an obvious problem  with objects that reference each
other here; for now, the solution is "don't do that.")

Reference counts are always manipulated explicitly.  The normal way is
to use the macro "Py_INCREF()" to take a new reference to an object
(i.e. increment its reference count by one), and "Py_DECREF()" to
release that reference (i.e. decrement the reference count by one).
The "Py_DECREF()" macro is considerably more complex than the incref
one, since it must check whether the reference count becomes zero and
then cause the object's deallocator to be called.  The deallocator is
a function pointer contained in the object's type structure.  The
type-specific deallocator takes care of releasing references for other
objects contained in the object if this is a compound object type,
such as a list, as well as performing any additional finalization
that's needed.  There's no chance that the reference count can
overflow; at least as many bits are used to hold the reference count
as there are distinct memory locations in virtual memory (assuming
"sizeof(Py_ssize_t) >= sizeof(void*)"). Thus, the reference count
increment is a simple operation.

It is not necessary to hold a *strong reference* (i.e. increment the
reference count) for every local variable that contains a pointer to
an object.  In theory, the  object's reference count goes up by one
when the variable is made to  point to it and it goes down by one when
the variable goes out of  scope.  However, these two cancel each other
out, so at the end the  reference count hasn't changed.  The only real
reason to use the  reference count is to prevent the object from being
deallocated as  long as our variable is pointing to it.  If we know
that there is at  least one other reference to the object that lives
at least as long as our variable, there is no need to take a new
*strong reference* (i.e. increment the reference count) temporarily.
An important situation where this arises is in objects  that are
passed as arguments to C functions in an extension module  that are
called from Python; the call mechanism guarantees to hold a  reference
to every argument for the duration of the call.

However, a common pitfall is to extract an object from a list and hold
on to it for a while without taking a new reference.  Some other
operation might conceivably remove the object from the list, releasing
that reference, and possibly deallocating it. The real danger is that
innocent-looking operations may invoke arbitrary Python code which
could do this; there is a code path which allows control to flow back
to the user from a "Py_DECREF()", so almost any operation is
potentially dangerous.

A safe approach is to always use the generic operations (functions
whose name begins with "PyObject_", "PyNumber_", "PySequence_" or
"PyMapping_"). These operations always create a new *strong reference*
(i.e. increment the reference count) of the object they return. This
leaves the caller with the responsibility to call "Py_DECREF()" when
they are done with the result; this soon becomes second nature.


Reference Count Details
~~~~~~~~~~~~~~~~~~~~~~~

The reference count behavior of functions in the Python/C API is best
explained in terms of *ownership of references*.  Ownership pertains
to references, never to objects (objects are not owned: they are
always shared).  "Owning a reference" means being responsible for
calling Py_DECREF on it when the reference is no longer needed.
Ownership can also be transferred, meaning that the code that receives
ownership of the reference then becomes responsible for eventually
releasing it by calling "Py_DECREF()" or "Py_XDECREF()" when it's no
longer needed---or passing on this responsibility (usually to its
caller). When a function passes ownership of a reference on to its
caller, the caller is said to receive a *new* reference.  When no
ownership is transferred, the caller is said to *borrow* the
reference. Nothing needs to be done for a *borrowed reference*.

Conversely, when a calling function passes in a reference to an
object, there are two possibilities: the function *steals* a reference
to the object, or it does not.  *Stealing a reference* means that when
you pass a reference to a function, that function assumes that it now
owns that reference, and you are not responsible for it any longer.

Few functions steal references; the two notable exceptions are
"PyList_SetItem()" and "PyTuple_SetItem()", which  steal a reference
to the item (but not to the tuple or list into which the item is
put!).  These functions were designed to steal a reference because of
a common idiom for populating a tuple or list with newly created
objects; for example, the code to create the tuple "(1, 2, "three")"
could look like this (forgetting about error handling for the moment;
a better way to code this is shown below):

   PyObject *t;

   t = PyTuple_New(3);
   PyTuple_SetItem(t, 0, PyLong_FromLong(1L));
   PyTuple_SetItem(t, 1, PyLong_FromLong(2L));
   PyTuple_SetItem(t, 2, PyUnicode_FromString("three"));

Here, "PyLong_FromLong()" returns a new reference which is immediately
stolen by "PyTuple_SetItem()".  When you want to keep using an object
although the reference to it will be stolen, use "Py_INCREF()" to grab
another reference before calling the reference-stealing function.

Incidentally, "PyTuple_SetItem()" is the *only* way to set tuple
items; "PySequence_SetItem()" and "PyObject_SetItem()" refuse to do
this since tuples are an immutable data type.  You should only use
"PyTuple_SetItem()" for tuples that you are creating yourself.

Equivalent code for populating a list can be written using
"PyList_New()" and "PyList_SetItem()".

However, in practice, you will rarely use these ways of creating and
populating a tuple or list.  There's a generic function,
"Py_BuildValue()", that can create most common objects from C values,
directed by a *format string*. For example, the above two blocks of
code could be replaced by the following (which also takes care of the
error checking):

   PyObject *tuple, *list;

   tuple = Py_BuildValue("(iis)", 1, 2, "three");
   list = Py_BuildValue("[iis]", 1, 2, "three");

It is much more common to use "PyObject_SetItem()" and friends with
items whose references you are only borrowing, like arguments that
were passed in to the function you are writing.  In that case, their
behaviour regarding references is much saner, since you don't have to
take a new reference just so you can give that reference away ("have
it be stolen").  For example, this function sets all items of a list
(actually, any mutable sequence) to a given item:

   int
   set_all(PyObject *target, PyObject *item)
   {
       Py_ssize_t i, n;

       n = PyObject_Length(target);
       if (n < 0)
           return -1;
       for (i = 0; i < n; i++) {
           PyObject *index = PyLong_FromSsize_t(i);
           if (!index)
               return -1;
           if (PyObject_SetItem(target, index, item) < 0) {
               Py_DECREF(index);
               return -1;
           }
           Py_DECREF(index);
       }
       return 0;
   }

The situation is slightly different for function return values.
While passing a reference to most functions does not change your
ownership responsibilities for that reference, many functions that
return a reference to an object give you ownership of the reference.
The reason is simple: in many cases, the returned object is created
on the fly, and the reference you get is the only reference to the
object.  Therefore, the generic functions that return object
references, like "PyObject_GetItem()" and  "PySequence_GetItem()",
always return a new reference (the caller becomes the owner of the
reference).

It is important to realize that whether you own a reference returned
by a function depends on which function you call only --- *the
plumage* (the type of the object passed as an argument to the
function) *doesn't enter into it!* Thus, if you  extract an item from
a list using "PyList_GetItem()", you don't own the reference --- but
if you obtain the same item from the same list using
"PySequence_GetItem()" (which happens to take exactly the same
arguments), you do own a reference to the returned object.

Here is an example of how you could write a function that computes the
sum of the items in a list of integers; once using
"PyList_GetItem()", and once using "PySequence_GetItem()".

   long
   sum_list(PyObject *list)
   {
       Py_ssize_t i, n;
       long total = 0, value;
       PyObject *item;

       n = PyList_Size(list);
       if (n < 0)
           return -1; /* Not a list */
       for (i = 0; i < n; i++) {
           item = PyList_GetItem(list, i); /* Can't fail */
           if (!PyLong_Check(item)) continue; /* Skip non-integers */
           value = PyLong_AsLong(item);
           if (value == -1 && PyErr_Occurred())
               /* Integer too big to fit in a C long, bail out */
               return -1;
           total += value;
       }
       return total;
   }

   long
   sum_sequence(PyObject *sequence)
   {
       Py_ssize_t i, n;
       long total = 0, value;
       PyObject *item;
       n = PySequence_Length(sequence);
       if (n < 0)
           return -1; /* Has no length */
       for (i = 0; i < n; i++) {
           item = PySequence_GetItem(sequence, i);
           if (item == NULL)
               return -1; /* Not a sequence, or other failure */
           if (PyLong_Check(item)) {
               value = PyLong_AsLong(item);
               Py_DECREF(item);
               if (value == -1 && PyErr_Occurred())
                   /* Integer too big to fit in a C long, bail out */
                   return -1;
               total += value;
           }
           else {
               Py_DECREF(item); /* Discard reference ownership */
           }
       }
       return total;
   }


Types
-----

There are few other data types that play a significant role in  the
Python/C API; most are simple C types such as int,  long, double and
char*.  A few structure types  are used to describe static tables used
to list the functions exported  by a module or the data attributes of
a new object type, and another is used to describe the value of a
complex number.  These will  be discussed together with the functions
that use them.

type Py_ssize_t
    *Part of the Stable ABI.*

   A signed integral type such that "sizeof(Py_ssize_t) ==
   sizeof(size_t)". C99 doesn't define such a thing directly (size_t
   is an unsigned integral type). See **PEP 353** for details.
   "PY_SSIZE_T_MAX" is the largest positive value of type
   "Py_ssize_t".


Exceptions
==========

The Python programmer only needs to deal with exceptions if specific
error handling is required; unhandled exceptions are automatically
propagated to the caller, then to the caller's caller, and so on,
until they reach the top-level interpreter, where they are reported to
the user accompanied by a stack traceback.

For C programmers, however, error checking always has to be explicit.
All functions in the Python/C API can raise exceptions, unless an
explicit claim is made otherwise in a function's documentation.  In
general, when a function encounters an error, it sets an exception,
discards any object references that it owns, and returns an error
indicator.  If not documented otherwise, this indicator is either
"NULL" or "-1", depending on the function's return type. A few
functions return a Boolean true/false result, with false indicating an
error.  Very few functions return no explicit error indicator or have
an ambiguous return value, and require explicit testing for errors
with "PyErr_Occurred()".  These exceptions are always explicitly
documented.

Exception state is maintained in per-thread storage (this is
equivalent to using global storage in an unthreaded application).  A
thread can be in one of two states: an exception has occurred, or not.
The function "PyErr_Occurred()" can be used to check for this: it
returns a borrowed reference to the exception type object when an
exception has occurred, and "NULL" otherwise.  There are a number of
functions to set the exception state: "PyErr_SetString()" is the most
common (though not the most general) function to set the exception
state, and "PyErr_Clear()" clears the exception state.

The full exception state consists of three objects (all of which can
be "NULL"): the exception type, the corresponding exception  value,
and the traceback.  These have the same meanings as the Python result
of "sys.exc_info()"; however, they are not the same: the Python
objects represent the last exception being handled by a Python  "try"
... "except" statement, while the C level exception state only exists
while an exception is being passed on between C functions until it
reaches the Python bytecode interpreter's  main loop, which takes care
of transferring it to "sys.exc_info()" and friends.

Note that starting with Python 1.5, the preferred, thread-safe way to
access the exception state from Python code is to call the function
"sys.exc_info()", which returns the per-thread exception state for
Python code.  Also, the semantics of both ways to access the exception
state have changed so that a function which catches an exception will
save and restore its thread's exception state so as to preserve the
exception state of its caller.  This prevents common bugs in exception
handling code caused by an innocent-looking function overwriting the
exception being handled; it also reduces the often unwanted lifetime
extension for objects that are referenced by the stack frames in the
traceback.

As a general principle, a function that calls another function to
perform some task should check whether the called function raised an
exception, and if so, pass the exception state on to its caller.  It
should discard any object references that it owns, and return an error
indicator, but it should *not* set another exception --- that would
overwrite the exception that was just raised, and lose important
information about the exact cause of the error.

A simple example of detecting exceptions and passing them on is shown
in the "sum_sequence()" example above.  It so happens that this
example doesn't need to clean up any owned references when it detects
an error.  The following example function shows some error cleanup.
First, to remind you why you like Python, we show the equivalent
Python code:

   def incr_item(dict, key):
       try:
           item = dict[key]
       except KeyError:
           item = 0
       dict[key] = item + 1

Here is the corresponding C code, in all its glory:

   int
   incr_item(PyObject *dict, PyObject *key)
   {
       /* Objects all initialized to NULL for Py_XDECREF */
       PyObject *item = NULL, *const_one = NULL, *incremented_item = NULL;
       int rv = -1; /* Return value initialized to -1 (failure) */

       item = PyObject_GetItem(dict, key);
       if (item == NULL) {
           /* Handle KeyError only: */
           if (!PyErr_ExceptionMatches(PyExc_KeyError))
               goto error;

           /* Clear the error and use zero: */
           PyErr_Clear();
           item = PyLong_FromLong(0L);
           if (item == NULL)
               goto error;
       }
       const_one = PyLong_FromLong(1L);
       if (const_one == NULL)
           goto error;

       incremented_item = PyNumber_Add(item, const_one);
       if (incremented_item == NULL)
           goto error;

       if (PyObject_SetItem(dict, key, incremented_item) < 0)
           goto error;
       rv = 0; /* Success */
       /* Continue with cleanup code */

    error:
       /* Cleanup code, shared by success and failure path */

       /* Use Py_XDECREF() to ignore NULL references */
       Py_XDECREF(item);
       Py_XDECREF(const_one);
       Py_XDECREF(incremented_item);

       return rv; /* -1 for error, 0 for success */
   }

This example represents an endorsed use of the "goto" statement  in C!
It illustrates the use of "PyErr_ExceptionMatches()" and
"PyErr_Clear()" to handle specific exceptions, and the use of
"Py_XDECREF()" to dispose of owned references that may be "NULL" (note
the "'X'" in the name; "Py_DECREF()" would crash when confronted with
a "NULL" reference).  It is important that the variables used to hold
owned references are initialized to "NULL" for this to work; likewise,
the proposed return value is initialized to "-1" (failure) and only
set to success after the final call made is successful.


Embedding Python
================

The one important task that only embedders (as opposed to extension
writers) of the Python interpreter have to worry about is the
initialization, and possibly the finalization, of the Python
interpreter.  Most functionality of the interpreter can only be used
after the interpreter has been initialized.

The basic initialization function is "Py_Initialize()". This
initializes the table of loaded modules, and creates the fundamental
modules "builtins", "__main__", and "sys".  It also initializes the
module search path ("sys.path").

"Py_Initialize()" does not set the "script argument list"
("sys.argv"). If this variable is needed by Python code that will be
executed later, "PyConfig.argv" and "PyConfig.parse_argv" must be set:
see Python Initialization Configuration.

On most systems (in particular, on Unix and Windows, although the
details are slightly different), "Py_Initialize()" calculates the
module search path based upon its best guess for the location of the
standard Python interpreter executable, assuming that the Python
library is found in a fixed location relative to the Python
interpreter executable.  In particular, it looks for a directory named
"lib/python*X.Y*" relative to the parent directory where the
executable named "python" is found on the shell command search path
(the environment variable "PATH").

For example, if the Python executable is found in
"/usr/local/bin/python", it will assume that the libraries are in
"/usr/local/lib/python*X.Y*".  (In fact, this particular path is also
the "fallback" location, used when no executable file named "python"
is found along "PATH".)  The user can override this behavior by
setting the environment variable "PYTHONHOME", or insert additional
directories in front of the standard path by setting "PYTHONPATH".

The embedding application can steer the search by calling
"Py_SetProgramName(file)" *before* calling  "Py_Initialize()".  Note
that "PYTHONHOME" still overrides this and "PYTHONPATH" is still
inserted in front of the standard path.  An application that requires
total control has to provide its own implementation of "Py_GetPath()",
"Py_GetPrefix()", "Py_GetExecPrefix()", and "Py_GetProgramFullPath()"
(all defined in "Modules/getpath.c").

Sometimes, it is desirable to "uninitialize" Python.  For instance,
the application may want to start over (make another call to
"Py_Initialize()") or the application is simply done with its  use of
Python and wants to free memory allocated by Python.  This can be
accomplished by calling "Py_FinalizeEx()".  The function
"Py_IsInitialized()" returns true if Python is currently in the
initialized state.  More information about these functions is given in
a later chapter. Notice that "Py_FinalizeEx()" does *not* free all
memory allocated by the Python interpreter, e.g. memory allocated by
extension modules currently cannot be released.


Debugging Builds
================

Python can be built with several macros to enable extra checks of the
interpreter and extension modules.  These checks tend to add a large
amount of overhead to the runtime so they are not enabled by default.

A full list of the various types of debugging builds is in the file
"Misc/SpecialBuilds.txt" in the Python source distribution. Builds are
available that support tracing of reference counts, debugging the
memory allocator, or low-level profiling of the main interpreter loop.
Only the most frequently used builds will be described in the
remainder of this section.

Py_DEBUG

Compiling the interpreter with the "Py_DEBUG" macro defined produces
what is generally meant by a debug build of Python. "Py_DEBUG" is
enabled in the Unix build by adding "--with-pydebug" to the
"./configure" command. It is also implied by the presence of the not-
Python-specific "_DEBUG" macro.  When "Py_DEBUG" is enabled in the
Unix build, compiler optimization is disabled.

In addition to the reference count debugging described below, extra
checks are performed, see Python Debug Build.

Defining "Py_TRACE_REFS" enables reference tracing (see the "configure
--with-trace-refs option"). When defined, a circular doubly linked
list of active objects is maintained by adding two extra fields to
every "PyObject".  Total allocations are tracked as well.  Upon exit,
all existing references are printed.  (In interactive mode this
happens after every statement run by the interpreter.)

Please refer to "Misc/SpecialBuilds.txt" in the Python source
distribution for more detailed information.
