Programming FAQ
***************


General Questions
=================


Is there a source code level debugger with breakpoints, single-stepping, etc.?
------------------------------------------------------------------------------

Yes.

Several debuggers for Python are described below, and the built-in
function "breakpoint()" allows you to drop into any of them.

The pdb module is a simple but adequate console-mode debugger for
Python.  It is part of the standard Python library, and is "documented
in the Library Reference Manual".  You can also write your own
debugger by using the code for pdb as an example.

The IDLE interactive development environment, which is part of the
standard Python distribution (normally available as
Tools/scripts/idle), includes a graphical debugger.

PythonWin is a Python IDE that includes a GUI debugger based on pdb.
The Pythonwin debugger colors breakpoints and has quite a few cool
features such as debugging non-Pythonwin programs.  Pythonwin is
available as part of the Python for Windows Extensions project and as
a part of the ActivePython distribution (see
https://www.activestate.com/activepython).

Boa Constructor is an IDE and GUI builder that uses wxWidgets.  It
offers visual frame creation and manipulation, an object inspector,
many views on the source like object browsers, inheritance
hierarchies, doc string generated html documentation, an advanced
debugger, integrated help, and Zope support.

Eric is an IDE built on PyQt and the Scintilla editing component.

Pydb is a version of the standard Python debugger pdb, modified for
use with DDD (Data Display Debugger), a popular graphical debugger
front end.  Pydb can be found at
http://bashdb.sourceforge.net/pydb/ and DDD can be found at
https://www.gnu.org/software/ddd.

There are a number of commercial Python IDEs that include graphical
debuggers.  They include:

* Wing IDE (https://wingware.com/)

* Komodo IDE (https://komodoide.com/)

* PyCharm (https://www.jetbrains.com/pycharm/)


Are there tools to help find bugs or perform static analysis?
-------------------------------------------------------------

Yes.

PyChecker is a static analysis tool that finds bugs in Python source
code and warns about code complexity and style.  You can get PyChecker
from http://pychecker.sourceforge.net/.

Pylint is another tool that checks if a module satisfies a coding
standard, and also makes it possible to write plug-ins to add a custom
feature.  In addition to the bug checking that PyChecker performs,
Pylint offers some additional features such as checking line length,
whether variable names are well-formed according to your coding
standard, whether declared interfaces are fully implemented, and more.
https://docs.pylint.org/ provides a full list of Pylint's features.

Static type checkers such as Mypy, Pyre, and Pytype can check type
hints in Python source code.


How can I create a stand-alone binary from a Python script?
-----------------------------------------------------------

You don't need the ability to compile Python to C code if all you want
is a stand-alone program that users can download and run without
having to install the Python distribution first.  There are a number
of tools that determine the set of modules required by a program and
bind these modules together with a Python binary to produce a single
executable.

One is to use the freeze tool, which is included in the Python source
tree as "Tools/freeze".  It converts Python byte code to C arrays;
with a C compiler you can embed all your modules into a new program,
which is then linked with the standard Python modules.

It works by scanning your source recursively for import statements (in
both forms) and looking for the modules in the standard Python path as
well as in the source directory (for built-in modules).  It then turns
the bytecode for modules written in Python into C code (array
initializers that can be turned into code objects using the marshal
module) and creates a custom-made config file that only contains those
built-in modules which are actually used in the program.  It then
compiles the generated C code and links it with the rest of the Python
interpreter to form a self-contained binary which acts exactly like
your script.

Obviously, freeze requires a C compiler.  There are several other
utilities which don't.  One is Thomas Heller's py2exe (Windows only)
at

   http://www.py2exe.org/

Another tool is Anthony Tuininga's cx_Freeze.


Are there coding standards or a style guide for Python programs?
----------------------------------------------------------------

Yes.  The coding style required for standard library modules is
documented as **PEP 8**.


Core Language
=============


Why am I getting an UnboundLocalError when the variable has a value?
--------------------------------------------------------------------

It can be a surprise to get the UnboundLocalError in previously
working code when it is modified by adding an assignment statement
somewhere in the body of a function.

This code:

>>> x = 10
>>> def bar():
...     print(x)
>>> bar()
10

works, but this code:

>>> x = 10
>>> def foo():
...     print(x)
...     x += 1

results in an UnboundLocalError:

>>> foo()
Traceback (most recent call last):
  ...
UnboundLocalError: local variable 'x' referenced before assignment

This is because when you make an assignment to a variable in a scope,
that variable becomes local to that scope and shadows any similarly
named variable in the outer scope.  Since the last statement in foo
assigns a new value to "x", the compiler recognizes it as a local
variable.  Consequently when the earlier "print(x)" attempts to print
the uninitialized local variable, an error results.

In the example above you can access the outer scope variable by
declaring it global:

>>> x = 10
>>> def foobar():
...     global x
...     print(x)
...     x += 1
>>> foobar()
10

This explicit declaration is required in order to remind you that
(unlike the superficially analogous situation with class and instance
variables) you are actually modifying the value of the variable in the
outer scope:

>>> print(x)
11

你可以使用 "nonlocal" 关键字在嵌套作用域中执行类似的操作：

>>> def foo():
...    x = 10
...    def bar():
...        nonlocal x
...        print(x)
...        x += 1
...    bar()
...    print(x)
>>> foo()
10
11


What are the rules for local and global variables in Python?
------------------------------------------------------------

In Python, variables that are only referenced inside a function are
implicitly global.  If a variable is assigned a value anywhere within
the function's body, it's assumed to be a local unless explicitly
declared as global.

Though a bit surprising at first, a moment's consideration explains
this.  On one hand, requiring "global" for assigned variables provides
a bar against unintended side-effects.  On the other hand, if "global"
was required for all global references, you'd be using "global" all
the time.  You'd have to declare as global every reference to a
built-in function or to a component of an imported module.  This
clutter would defeat the usefulness of the "global" declaration for
identifying side-effects.
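
These rules can be seen directly in a short sketch (the function names
here are invented for illustration):

```python
x = 10  # a global variable

def read_only():
    # "x" is only referenced, so it is implicitly the global "x"
    return x

def assigns():
    # assignment makes "x" local; the global is untouched
    x = 99
    return x

def rebinds_global():
    global x  # explicit declaration: assignments now target the global
    x = 42
    return x

print(read_only())       # 10
print(assigns())         # 99
print(x)                 # still 10
print(rebinds_global())  # 42
print(x)                 # now 42
```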


Why do lambdas defined in a loop with different values all return the same result?
----------------------------------------------------------------------------------

Assume you use a for loop to define a few different lambdas (or even
plain functions), e.g.:

   >>> squares = []
   >>> for x in range(5):
   ...     squares.append(lambda: x**2)

This gives you a list that contains 5 lambdas that calculate "x**2".
You might expect that, when called, they would return, respectively,
"0", "1", "4", "9", and "16".  However, when you actually try, you
will see that they all return "16":

   >>> squares[2]()
   16
   >>> squares[4]()
   16

发生这种情况是因为 "x" 不是lambdas的内部变量，而是在外部作用域中定义，
并且在调用lambda时访问它 - 而不是在定义它时。 在循环结束时， "x" 的值
是 "4" ，所以所有的函数现在返回 "4**2" ，即 "16" 。你还可以通过更改
"x" 的值来验证这一点，并查看lambdas的结果如何变化:

   >>> x = 8
   >>> squares[2]()
   64

In order to avoid this, you need to save the values in variables local
to the lambdas, so that they don't rely on the value of the global
"x":

   >>> squares = []
   >>> for x in range(5):
   ...     squares.append(lambda n=x: n**2)

这里， "n=x" 在lambda本地创建一个新的变量 "n" ，并在定义lambda时计算，
使它具有与 "x" 在循环中该点相同的值。这意味着 "n" 的值在第一个lambda中
为 "0" ，在第二个lambda中为 "1" ，在第三个中为 "2" ，依此类推。因此每
个lambda现在将返回正确的结果:

   >>> squares[2]()
   4
   >>> squares[4]()
   16

Note that this behaviour is not peculiar to lambdas, but applies to
regular functions too.


How do I share global variables across modules?
-----------------------------------------------

The canonical way to share information across modules within a single
program is to create a special module (often called config or cfg).
Just import the config module in all modules of your application; the
module then becomes available as a global name.  Because there is only
one instance of each module, any changes made to the module object get
reflected everywhere.  For example:

config.py:

   x = 0   # Default value of the 'x' configuration setting

mod.py:

   import config
   config.x = 1

main.py:

   import config
   import mod
   print(config.x)

Note that using a module is also the basis for implementing the
Singleton design pattern, for the same reason.


What are the "best practices" for using import in a module?
-----------------------------------------------------------

In general, don't use "from modulename import *".  Doing so clutters
the importer's namespace, and makes it much harder for linters to
detect undefined names.

Import modules at the top of a file.  Doing so makes it clear what
other modules your code requires and avoids questions of whether the
module name is in scope.  Using one import per line makes it easy to
add and delete module imports, but using multiple imports per line
uses less screen space.

It's good practice if you import modules in the following order:

1. standard library modules -- e.g. "sys", "os", "getopt", "re"

2. third-party library modules (anything installed in Python's
   site-packages directory) -- e.g. mx.DateTime, ZODB, PIL.Image, etc.

3. locally-developed modules

It is sometimes necessary to move imports to a function or class to
avoid problems with circular imports.  Gordon McMillan says:

   Circular imports are fine where both modules use the "import
   <module>" form of import.  They fail when the 2nd module wants to
   grab a name out of the first ("from module import name") and the
   import is at the top level.  That's because names in the 1st are
   not yet available, because the first module is busy importing the
   2nd.

In this case, if the second module is only used in one function, then
the import can easily be moved into that function.  By the time the
import is called, the first module will have finished initializing,
and the second module can do its import.

It may also be necessary to move imports out of the top level of code
if some of the modules are platform-specific.  In that case, it may
not even be possible to import all of the modules at the top of the
file.  In this case, importing the correct modules in the
corresponding platform-specific code is a good option.

Only move imports into a local scope, such as inside a function
definition, if it's necessary to solve a problem such as avoiding a
circular import or trying to reduce the initialization time of a
module.  This technique is especially helpful if many of the imports
are unnecessary depending on how the program executes.  You may also
want to move imports into a function if the modules are only ever used
in that function.  Note that loading a module the first time may be
expensive because of the one-time initialization of the module, but
loading a module multiple times is virtually free, costing only a
couple of dictionary lookups.  Even if the module name has gone out of
scope, the module is probably available in "sys.modules".
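
As a sketch of the local-import technique (the function name and the
choice of "json" here are just illustrative):

```python
def parse_config(text):
    # Imported here rather than at the top of the file: "json" is only
    # needed by this function, and repeat calls cost just a dictionary
    # lookup in sys.modules after the first import.
    import json
    return json.loads(text)

print(parse_config('{"x": 1}'))  # {'x': 1}
```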


Why are default values shared between objects?
----------------------------------------------

This type of bug commonly bites neophyte programmers.  Consider this
function:

   def foo(mydict={}):  # Danger: shared reference to one dict for all calls
       ... compute something ...
       mydict[key] = value
       return mydict

The first time you call this function, "mydict" contains a single
item.  The second time, "mydict" contains two items because when
"foo()" begins executing, "mydict" starts out with an item already in
it.

It is often expected that a function call creates new objects for
default values.  This is not what happens.  Default values are created
exactly once, when the function is defined.  If that object is
changed, like the dictionary in this example, subsequent calls to the
function will refer to this changed object.

By definition, immutable objects such as numbers, strings, tuples, and
"None" are safe from change.  Changes to mutable objects such as
dictionaries, lists, and class instances can lead to confusion.

Because of this feature, it is good programming practice to not use
mutable objects as default values.  Instead, use "None" as the default
value and inside the function, check if the parameter is "None" and
create a new list/dictionary/whatever if it is.  For example, don't
write:

   def foo(mydict={}):
       ...

but:

   def foo(mydict=None):
       if mydict is None:
           mydict = {}  # create a new dict for local namespace

This feature can be useful.  When you have a function that's time-
consuming to compute, a common technique is to cache the parameters
and the resulting value of each call to the function, and return the
cached value if the same value is requested again.  This is called
"memoizing", and can be implemented like this:

   # Callers can only provide two parameters and optionally pass _cache by keyword
   def expensive(arg1, arg2, *, _cache={}):
       if (arg1, arg2) in _cache:
           return _cache[(arg1, arg2)]

       # Calculate the value
       result = ... expensive computation ...
       _cache[(arg1, arg2)] = result           # Store result in the cache
       return result

You could use a global variable containing a dictionary instead of the
default value; it's a matter of taste.
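
The standard library also ships a ready-made memoizing decorator,
"functools.lru_cache", which is usually preferable to a hand-rolled
cache (the function here is a trivial stand-in for an expensive one):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(arg1, arg2):
    # Pretend this is slow; results are cached per argument pair.
    return arg1 ** arg2

print(expensive(2, 10))             # 1024 (computed)
print(expensive(2, 10))             # 1024 (served from the cache)
print(expensive.cache_info().hits)  # 1
```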


How can I pass optional or keyword parameters from one function to another?
---------------------------------------------------------------------------

Collect the arguments using the "*" and "**" specifiers in the
function's parameter list; this gives you the positional arguments as
a tuple and the keyword arguments as a dictionary.  You can then pass
these arguments when calling another function by using "*" and "**":

   def f(x, *args, **kwargs):
       ...
       kwargs['width'] = '14.3c'
       ...
       g(x, *args, **kwargs)


What is the difference between arguments and parameters?
--------------------------------------------------------

*Parameters* are defined by the names that appear in a function
definition, whereas *arguments* are the values actually passed to a
function when calling it.  Parameters define what types of arguments a
function can accept.  For example, given the function definition:

   def func(foo, bar=None, **kwargs):
       pass

*foo*, *bar* and *kwargs* are parameters of "func".  However, when
calling "func", for example:

   func(42, bar=314, extra=somevar)

the values "42", "314", and "somevar" are arguments.


Why did changing list 'y' also change list 'x'?
-----------------------------------------------

If you wrote code like:

   >>> x = []
   >>> y = x
   >>> y.append(10)
   >>> y
   [10]
   >>> x
   [10]

you might be wondering why appending an element to "y" changed "x"
too.

There are two factors that produce this result:

1. Variables are simply names that refer to objects.  Doing "y = x"
   doesn't create a copy of the list -- it creates a new variable "y"
   that refers to the same object "x" refers to.  This means that
   there is only one object (the list), and both "x" and "y" refer to
   it.

2. Lists are *mutable*, which means that you can change their
   content.

After the call to "append()", the content of the mutable object has
changed from "[]" to "[10]".  Since both the variables refer to the
same object, using either name accesses the modified value "[10]".

如果我们改为将不可变对象分配给 "x":

   >>> x = 5  # ints are immutable
   >>> y = x
   >>> x = x + 1  # 5 can't be mutated, we are creating a new object here
   >>> x
   6
   >>> y
   5

we can see that in this case "x" and "y" are not equal anymore.  This
is because integers are *immutable*, and when we do "x = x + 1" we are
not mutating the int "5" by incrementing its value; instead, we are
creating a new object (the int "6") and assigning it to "x" (that is,
changing which object "x" refers to).  After this assignment we have
two objects (the ints "6" and "5") and two variables that refer to
them ("x" now refers to "6" but "y" still refers to "5").

Some operations (for example "y.append(10)" and "y.sort()") mutate the
object, whereas superficially similar operations (for example "y = y +
[10]" and "sorted(y)") create a new object.  In general in Python (and
in all cases in the standard library) a method that mutates an object
will return "None" to help avoid getting the two types of operations
confused.  So if you mistakenly write "y.sort()" thinking it will give
you a sorted copy of "y", you'll instead end up with "None", which
will likely cause your program to generate an easily diagnosed error.

However, there is one class of operations where the same operation
sometimes has different behaviors with different types:  the augmented
assignment operators.  For example, "+=" mutates lists but not tuples
or ints ("a_list += [1, 2, 3]" is equivalent to "a_list.extend([1, 2,
3])" and mutates "a_list", whereas "some_tuple += (1, 2, 3)" and
"some_int += 1" create new objects).

In other words:

* If we have a mutable object ("list", "dict", "set", etc.), we can
  use some specific operations to mutate it and all the variables that
  refer to it will see the change.

* If we have an immutable object ("str", "int", "tuple", etc.), all
  the variables that refer to it will always see the same value, but
  operations that transform that value into a new value always return
  a new object.

If you want to know if two variables refer to the same object or not,
you can use the "is" operator, or the built-in function "id()".
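
For example:

```python
x = []
y = x        # "y" refers to the same list object as "x"
z = list(x)  # "z" is a new list with equal contents

print(y is x)          # True: same object
print(id(y) == id(x))  # True: identical ids
print(z == x, z is x)  # True False: equal value, distinct object
```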


How do I write a function with output parameters (call by reference)?
---------------------------------------------------------------------

Remember that arguments are passed by assignment in Python.  Since
assignment just creates references to objects, there's no alias
between an argument name in the caller and callee, and so no call-by-
reference per se.  You can achieve the desired effect in a number of
ways.

1. By returning a tuple of the results:

      def func2(a, b):
          a = 'new-value'        # a and b are local names
          b = b + 1              # assigned to new objects
          return a, b            # return new values

      x, y = 'old-value', 99
      x, y = func2(x, y)
      print(x, y)                # output: new-value 100

   This is almost always the clearest solution.

2. By using global variables.  This isn't thread-safe, and is not
   recommended.

3. By passing a mutable (changeable in-place) object:

      def func1(a):
          a[0] = 'new-value'     # 'a' references a mutable list
          a[1] = a[1] + 1        # changes a shared object

      args = ['old-value', 99]
      func1(args)
      print(args[0], args[1])    # output: new-value 100

4. By passing in a dictionary that gets mutated:

      def func3(args):
          args['a'] = 'new-value'     # args is a mutable dictionary
          args['b'] = args['b'] + 1   # change it in-place

      args = {'a': 'old-value', 'b': 99}
      func3(args)
      print(args['a'], args['b'])

5. Or bundle up values in a class instance:

      class callByRef:
          def __init__(self, /, **args):
              for key, value in args.items():
                  setattr(self, key, value)

      def func4(args):
          args.a = 'new-value'        # args is a mutable callByRef
          args.b = args.b + 1         # change object in-place

      args = callByRef(a='old-value', b=99)
      func4(args)
      print(args.a, args.b)

   There's almost never a good reason to get this complicated.

Your best choice is to return a tuple containing the multiple results.


How do you make a higher order function in Python?
--------------------------------------------------

You have two choices: you can use nested scopes or you can use
callable objects. For example, suppose you wanted to define
"linear(a,b)" which returns a function "f(x)" that computes the value
"a*x+b".  Using nested scopes:

   def linear(a, b):
       def result(x):
           return a * x + b
       return result

Or using a callable object:

   class linear:

       def __init__(self, a, b):
           self.a, self.b = a, b

       def __call__(self, x):
           return self.a * x + self.b

In both cases,

   taxes = linear(0.3, 2)

gives a callable object where "taxes(10e6) == 0.3 * 10e6 + 2".

The callable object approach has the disadvantage that it is a bit
slower and results in slightly longer code.  However, note that a
collection of callables can share their signature via inheritance:

   class exponential(linear):
       # __init__ inherited
       def __call__(self, x):
           return self.a * (x ** self.b)

Object can encapsulate state for several methods:

   class counter:

       value = 0

       def set(self, x):
           self.value = x

       def up(self):
           self.value = self.value + 1

       def down(self):
           self.value = self.value - 1

   count = counter()
   inc, dec, reset = count.up, count.down, count.set

Here "inc()", "dec()" and "reset()" act like functions which share the
same counting variable.
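
If you prefer plain functions to a class, a closure over a "nonlocal"
variable can bundle state the same way (a sketch with invented names,
not part of the original example):

```python
def make_counter(start=0):
    value = start          # shared state captured by both closures

    def up():
        nonlocal value     # rebind the enclosing "value", not a local
        value += 1
        return value

    def down():
        nonlocal value
        value -= 1
        return value

    return up, down

inc, dec = make_counter()
print(inc())  # 1
print(inc())  # 2
print(dec())  # 1
```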


How do I copy an object in Python?
----------------------------------

In general, try "copy.copy()" or "copy.deepcopy()" for the general
case. Not all objects can be copied, but most can.

Some objects can be copied more easily.  Dictionaries have a "copy()"
method:

   newdict = olddict.copy()

Sequences can be copied by slicing:

   new_l = l[:]
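
Note that both the dictionary "copy()" method and slicing make
*shallow* copies: nested mutable objects are shared with the original.
Use "copy.deepcopy()" when you need a fully independent copy:

```python
import copy

outer = [[1, 2], [3, 4]]
shallow = outer[:]            # copies only the outer list
deep = copy.deepcopy(outer)   # recursively copies the inner lists too

outer[0].append(99)
print(shallow[0])  # [1, 2, 99] -- inner list is shared with outer
print(deep[0])     # [1, 2]     -- fully independent
```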


How can I find the methods or attributes of an object?
------------------------------------------------------

For an instance x of a user-defined class, "dir(x)" returns an
alphabetized list of the names containing the instance attributes and
methods and attributes defined by its class.


How can my code discover the name of an object?
-----------------------------------------------

Generally speaking, it can't, because objects don't really have names.
Essentially, assignment always binds a name to a value; The same is
true of "def" and "class" statements, but in that case the value is a
callable. Consider the following code:

   >>> class A:
   ...     pass
   ...
   >>> B = A
   >>> a = B()
   >>> b = a
   >>> print(b)
   <__main__.A object at 0x16D07CC>
   >>> print(a)
   <__main__.A object at 0x16D07CC>

Arguably the class has a name: even though it is bound to two names
and invoked through the name B the created instance is still reported
as an instance of class A.  However, it is impossible to say whether
the instance's name is a or b, since both names are bound to the same
value.

Generally speaking it should not be necessary for your code to "know
the names" of particular values. Unless you are deliberately writing
introspective programs, this is usually an indication that a change of
approach might be beneficial.

In comp.lang.python, Fredrik Lundh once gave an excellent analogy in
answer to this question:

   The same way as you get the name of that cat you found on your
   porch: the cat (object) itself cannot tell you its name, and it
   doesn't really care -- so the only way to find out what it's called
   is to ask all your neighbours (namespaces) if it's their cat
   (object)...

   ....and don't be surprised if you'll find that it's known by many
   names, or no name at all!


What's up with the comma operator's precedence?
-----------------------------------------------

Comma is not an operator in Python.  Consider this session:

   >>> "a" in "b", "a"
   (False, 'a')

Since the comma is not an operator, but a separator between
expressions the above is evaluated as if you had entered:

   ("a" in "b"), "a"

not:

   "a" in ("b", "a")

The same is true of the various assignment operators ("=", "+=" etc).
They are not truly operators but syntactic delimiters in assignment
statements.


Is there an equivalent of C's "?:" ternary operator?
----------------------------------------------------

Yes, there is. The syntax is as follows:

   [on_true] if [expression] else [on_false]

   x, y = 50, 25
   small = x if x < y else y

Before this syntax was introduced in Python 2.5, a common idiom was to
use logical operators:

   [expression] and [on_true] or [on_false]

However, this idiom is unsafe, as it can give wrong results when
*on_true* has a false boolean value.  Therefore, it is always better
to use the "... if ... else ..." form.


Is it possible to write obfuscated one-liners in Python?
--------------------------------------------------------

Yes.  Usually this is done by nesting "lambda" within "lambda".  See
the following three examples, due to Ulf Bartelt:

   from functools import reduce

   # Primes < 1000
   print(list(filter(None,map(lambda y:y*reduce(lambda x,y:x*y!=0,
   map(lambda x,y=y:y%x,range(2,int(pow(y,0.5)+1))),1),range(2,1000)))))

   # First 10 Fibonacci numbers
   print(list(map(lambda x,f=lambda x,f:(f(x-1,f)+f(x-2,f)) if x>1 else 1:
   f(x,f), range(10))))

   # Mandelbrot set
   print((lambda Ru,Ro,Iu,Io,IM,Sx,Sy:reduce(lambda x,y:x+y,map(lambda y,
   Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,Sy=Sy,L=lambda yc,Iu=Iu,Io=Io,Ru=Ru,Ro=Ro,i=IM,
   Sx=Sx,Sy=Sy:reduce(lambda x,y:x+y,map(lambda x,xc=Ru,yc=yc,Ru=Ru,Ro=Ro,
   i=i,Sx=Sx,F=lambda xc,yc,x,y,k,f=lambda xc,yc,x,y,k,f:(k<=0)or (x*x+y*y
   >=4.0) or 1+f(xc,yc,x*x-y*y+xc,2.0*x*y+yc,k-1,f):f(xc,yc,x,y,k,f):chr(
   64+F(Ru+x*(Ro-Ru)/Sx,yc,0,0,i)),range(Sx))):L(Iu+y*(Io-Iu)/Sy),range(Sy
   ))))(-2.1, 0.7, -1.2, 1.2, 30, 80, 24))
   #    \___ ___/  \___ ___/  |   |   |__ lines on screen
   #        V          V      |   |______ columns on screen
   #        |          |      |__________ maximum of "iterations"
   #        |          |_________________ range on y axis
   #        |____________________________ range on x axis

Don't try this at home, kids!


What does the slash (/) in the parameter list of a function mean?
-----------------------------------------------------------------

A slash in the argument list of a function denotes that the parameters
prior to it are positional-only.  Positional-only parameters are the
ones without an externally-usable name.  Upon calling a function that
accepts positional-only parameters, arguments are mapped to parameters
based solely on their position. For example, "pow()" is a function
that accepts positional-only parameters. Its documentation looks like
this:

   >>> help(pow)
   Help on built-in function pow in module builtins:

   pow(x, y, z=None, /)
      Equivalent to x**y (with two arguments) or x**y % z (with three arguments)

      Some types, such as ints, are able to use a more efficient algorithm when
      invoked using the three argument form.

The slash at the end of the parameter list means that all three
parameters are positional-only. Thus, calling "pow()" with keyword
arguments would lead to an error:

   >>> pow(x=3, y=4)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   TypeError: pow() takes no keyword arguments

Note that as of this writing this notation is only used in
documentation and is not valid Python syntax, although there is **PEP
570**, which proposes a syntax for positional-only parameters in
Python.


Numbers and strings
===================


How do I specify hexadecimal and octal integers?
------------------------------------------------

To specify an octal digit, precede the octal value with a zero, and
then a lower or uppercase "o".  For example, to set the variable "a"
to the octal value "10" (8 in decimal), type:

   >>> a = 0o10
   >>> a
   8

Hexadecimal is just as easy.  Simply precede the hexadecimal number
with a zero, and then a lower or uppercase "x".  Hexadecimal digits
can be specified in lower or uppercase.  For example, in the Python
interpreter:

   >>> a = 0xa5
   >>> a
   165
   >>> b = 0XB2
   >>> b
   178


Why does -22 // 10 return -3?
-----------------------------

It's primarily driven by the desire that "i % j" have the same sign as
"j". If you want that, and also want:

   i == (i // j) * j + (i % j)

then integer division has to return the floor.  C also requires that
identity to hold, and then compilers that truncate "i // j" need to
make "i % j" have the same sign as "i".

There are few real use cases for "i % j" when "j" is negative.  When
"j" is positive, there are many, and in virtually all of them it's
more useful for "i % j" to be ">= 0".  If the clock says 10 now, what
did it say 200 hours ago?  "-190 % 12 == 2" is useful; "-190 % 12 ==
-10" is a bug waiting to bite.


How do I convert a string to a number?
--------------------------------------

For integers, use the built-in "int()" type constructor, e.g.
"int('144') == 144".  Similarly, "float()" converts to floating-point,
e.g. "float('144') == 144.0".

By default, these interpret the number as decimal, so that
"int('0144') == 144" and "int('0x144')" raises "ValueError".
"int(string, base)" takes the base to convert from as a second
optional argument, so "int('0x144', 16) == 324".  If the base is
specified as 0, the number is interpreted using Python's rules: a
leading '0o' indicates octal, and '0x' indicates a hex number.
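
For example:

```python
print(int('144'))        # 144: decimal by default
print(int('0x144', 16))  # 324: explicit base 16
print(int('0x144', 0))   # 324: base 0 applies Python's literal rules
print(int('0o144', 0))   # 100: leading '0o' means octal

try:
    int('0x144')         # base 10 rejects the '0x' prefix
except ValueError as exc:
    print('ValueError:', exc)
```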

Do not use the built-in function "eval()" if all you need is to
convert strings to numbers.  "eval()" will be significantly slower and
it presents a security risk: someone could pass you a Python
expression that might have unwanted side effects.  For example,
someone could pass "__import__('os').system("rm -rf $HOME")" which
would erase your home directory.

"eval()" also has the effect of interpreting numbers as Python
expressions, so that e.g. "eval('09')" gives a syntax error because
Python does not allow leading '0' in a decimal number (except '0').


How do I convert a number to a string?
--------------------------------------

To convert, e.g., the number 144 to the string '144', use the built-in
type constructor "str()".  If you want a hexadecimal or octal
representation, use the built-in functions "hex()" or "oct()".  For
fancy formatting, see the Formatted string literals and Format String
Syntax sections, e.g. ""{:04d}".format(144)" yields "'0144'" and
""{:.3f}".format(1.0/3.0)" yields "'0.333'".


How do I modify a string in place?
----------------------------------

You can't, because strings are immutable.  In most situations, you
should simply construct a new string from the various parts you want
to assemble it from.  However, if you need an object with the ability
to modify in-place unicode data, try using an "io.StringIO" object or
the "array" module:

   >>> import io
   >>> s = "Hello, world"
   >>> sio = io.StringIO(s)
   >>> sio.getvalue()
   'Hello, world'
   >>> sio.seek(7)
   7
   >>> sio.write("there!")
   6
   >>> sio.getvalue()
   'Hello, there!'

   >>> import array
   >>> a = array.array('u', s)
   >>> print(a)
   array('u', 'Hello, world')
   >>> a[0] = 'y'
   >>> print(a)
   array('u', 'yello, world')
   >>> a.tounicode()
   'yello, world'


How do I use strings to call functions/methods?
-----------------------------------------------

There are various techniques.

* The best is to use a dictionary that maps strings to functions.
  The primary advantage of this technique is that the strings do not
  need to match the names of the functions.  This is also the primary
  technique used to emulate a case construct:

     def a():
         pass

     def b():
         pass

     dispatch = {'go': a, 'stop': b}  # Note lack of parens for funcs

     dispatch[get_input()]()  # Note trailing parens to call function

* 使用内置函数 "getattr()"

     import foo
     getattr(foo, 'bar')()

  Note that "getattr()" works on any object, including classes, class
  instances, modules, and so on.

  This is used in several places in the standard library, like this:

     class Foo:
         def do_foo(self):
             ...

         def do_bar(self):
             ...

     f = getattr(foo_instance, 'do_' + opname)
     f()

* Use "locals()" or "eval()" to resolve the function name:

     def myFunc():
         print("hello")

     fname = "myFunc"

     f = locals()[fname]
     f()

     f = eval(fname)
     f()

  Note: Using "eval()" is slow and dangerous.  If you don't have
  absolute control over the contents of the string, someone could pass
  a string that resulted in an arbitrary function being executed.


Is there an equivalent to Perl's chomp() for removing trailing newlines from strings?
-------------------------------------------------------------------------------------

You can use "S.rstrip("\r\n")" to remove all occurrences of any line
terminator from the end of the string "S" without removing other
trailing whitespace.  If the string "S" represents more than one line,
with several empty lines at the end, the line terminators for all the
blank lines will be removed:

   >>> lines = ("line 1 \r\n"
   ...          "\r\n"
   ...          "\r\n")
   >>> lines.rstrip("\n\r")
   'line 1 '

Since this is typically only desired when reading text one line at a
time, using "S.rstrip()" this way works well.


Is there a scanf() or sscanf() equivalent?
------------------------------------------

Not as such.

For simple input parsing, the easiest approach is usually to split the
line into whitespace-delimited words using the "split()" method of
string objects and then convert decimal strings to numeric values
using "int()" or "float()".  "split()" supports an optional "sep"
parameter which is useful if the line uses something other than
whitespace as a separator.

For more complicated input parsing, regular expressions are more
powerful than C's "sscanf()" and better suited for the task.
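
As a small sketch, parsing a line such as "Knight 42 3.5" (the record
format here is invented for illustration) with both approaches:

```python
import re

line = "Knight 42 3.5"

# Simple case: whitespace-separated fields plus int()/float()
name, count, score = line.split()
print(name, int(count), float(score))  # Knight 42 3.5

# Regex alternative, roughly like sscanf("%s %d %f", ...)
m = re.match(r"(\S+)\s+(\d+)\s+([\d.]+)", line)
print(m.group(1), int(m.group(2)), float(m.group(3)))
```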


What does 'UnicodeDecodeError' or 'UnicodeEncodeError' error mean?
------------------------------------------------------------------

See the Unicode HOWTO.


Performance
===========


My program is too slow. How do I speed it up?
---------------------------------------------

That's a tough one, in general.  First, here are a list of things to
remember before diving further:

* Performance characteristics vary across Python implementations.
  This FAQ focuses on *CPython*.

* Behaviour can vary across operating systems, especially when talking
  about I/O or multi-threading.

* You should always find the hot spots in your program *before*
  attempting to optimize any code (see the "profile" module).

* Writing benchmark scripts will allow you to iterate quickly when
  searching for improvements (see the "timeit" module).

* It is highly recommended to have good code coverage (through unit
  testing or any other technique) before potentially introducing
  regressions hidden in sophisticated optimizations.

That being said, there are many tricks to speed up Python code.  Here
are some general principles which go a long way towards reaching
acceptable performance levels:

* Making your algorithms faster (or changing to faster ones) can yield
  much larger benefits than trying to sprinkle micro-optimization
  tricks all over your code.

* Use the right data structures.  Study documentation for the Built-in
  Types and the "collections" module.

* When the standard library provides a primitive for doing something,
  it is likely (although not guaranteed) to be faster than any
  alternative you may come up with.  This is doubly true for
  primitives written in C, such as builtins and some extension types.
  For example, be sure to use either the "list.sort()" built-in method
  or the related "sorted()" function to do sorting (and see the
  Sorting HOW TO for examples of moderately advanced usage).

* Abstractions tend to create indirections and force the interpreter
  to work more.  If the levels of indirection outweigh the amount of
  useful work done, your program will be slower.  You should avoid
  excessive abstraction, especially under the form of tiny functions
  or methods (which are also often detrimental to readability).

If you have reached the limit of what pure Python can allow, there are
tools to take you further away.  For example, Cython can compile a
slightly modified version of Python code into a C extension, and can
be used on many different platforms.  Cython can take advantage of
compilation (and optional type annotations) to make your code
significantly faster than when interpreted.  If you are confident in
your C programming skills, you can also write a C extension module
yourself.

See also: the wiki page devoted to performance tips.


What is the most efficient way to concatenate many strings together?
--------------------------------------------------------------------

"str" and "bytes" objects are immutable, therefore concatenating many
strings together is inefficient as each concatenation creates a new
object.  In the general case, the total runtime cost is quadratic in
the total string length.

To accumulate many "str" objects, the recommended idiom is to place
them into a list and call "str.join()" at the end:

   chunks = []
   for s in my_strings:
       chunks.append(s)
   result = ''.join(chunks)

(another reasonably efficient idiom is to use "io.StringIO")

To accumulate many "bytes" objects, the recommended idiom is to extend
a "bytearray" object using in-place concatenation (the "+=" operator):

   result = bytearray()
   for b in my_bytes_objects:
       result += b


Sequences (Tuples/Lists)
========================


How do I convert between tuples and lists?
------------------------------------------

The type constructor "tuple(seq)" converts any sequence (actually, any
iterable) into a tuple with the same items in the same order.

For example, "tuple([1, 2, 3])" yields "(1, 2, 3)" and "tuple('abc')"
yields "('a', 'b', 'c')".  If the argument is a tuple, it does not
make a copy but returns the same object, so it is cheap to call
"tuple()" when you aren't sure that an object is already a tuple.

The type constructor "list(seq)" converts any sequence or iterable
into a list with the same items in the same order.  For example,
"list((1, 2, 3))" yields "[1, 2, 3]" and "list('abc')" yields "['a',
'b', 'c']".  If the argument is a list, it makes a copy just like
"seq[:]" would.


What's a negative index?
------------------------

Python sequences are indexed with positive numbers and negative
numbers.  For positive numbers 0 is the first index 1 is the second
index and so forth.  For negative indices -1 is the last index and -2
is the penultimate (next to last) index and so forth.  Think of
"seq[-n]" as the same as "seq[len(seq)-n]".

Using negative indices can be very convenient.  For example "S[:-1]"
is all of the string except for its last character, which is useful
for removing the trailing newline from a string.


How do I iterate over a sequence in reverse order?
--------------------------------------------------

Use the "reversed()" built-in function, which is new in Python 2.4:

   for x in reversed(sequence):
       ...  # do something with x ...

This won't touch your original sequence, but build a new copy with
reversed order to iterate over.

With Python 2.3, you can use an extended slice syntax:

   for x in sequence[::-1]:
       ...  # do something with x ...


How do you remove duplicates from a list?
-----------------------------------------

See the Python Cookbook for a long discussion of many ways to do this:

   https://code.activestate.com/recipes/52560/

If you don't mind reordering the list, sort it and then scan from the
end of the list, deleting duplicates as you go:

   if mylist:
       mylist.sort()
       last = mylist[-1]
       for i in range(len(mylist)-2, -1, -1):
           if last == mylist[i]:
               del mylist[i]
           else:
               last = mylist[i]

If all elements of the list may be used as set keys (i.e. they are all
*hashable*) this is often faster:

   mylist = list(set(mylist))

This converts the list into a set, thereby removing duplicates, and
then back into a list.
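
If the original order matters, "dict.fromkeys()" (which preserves
insertion order on Python 3.7 and later) removes duplicates while
keeping the first occurrence of each element:

```python
mylist = [3, 1, 3, 2, 1]
deduped = list(dict.fromkeys(mylist))  # keys keep first-seen order
print(deduped)  # [3, 1, 2]
```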


How do you make an array in Python?
-----------------------------------

Use a list:

   ["this", 1, "is", "an", "array"]

Lists are equivalent to C or Pascal arrays in their time complexity;
the primary difference is that a Python list can contain objects of
many different types.

The "array" module also provides methods for creating arrays of fixed
types with compact representations, but they are slower to index than
lists.  Also note that the Numeric extensions and others define array-
like structures with various characteristics as well.

To get Lisp-style linked lists, you can emulate cons cells using
tuples:

   lisp_list = ("like",  ("this",  ("example", None) ) )

If mutability is desired, you could use lists instead of tuples.  Here
the analogue of lisp car is "lisp_list[0]" and the analogue of cdr is
"lisp_list[1]".  Only do this if you're sure you really need to,
because it's usually a lot slower than using Python lists.


How do I create a multidimensional list?
----------------------------------------

You probably tried to make a multidimensional array like this:

   >>> A = [[None] * 2] * 3

This looks correct if you print it:

   >>> A
   [[None, None], [None, None], [None, None]]

But when you assign a value, it shows up in multiple places:

   >>> A[0][0] = 5
   >>> A
   [[5, None], [5, None], [5, None]]

The reason is that replicating a list with "*" doesn't create copies,
it only creates references to the existing objects.  The "*3" creates
a list containing 3 references to the same list of length two.
Changes to one row will show in all rows, which is almost certainly
not what you want.

The suggested approach is to create a list of the desired length first
and then fill in each element with a newly created list:

   A = [None] * 3
   for i in range(3):
       A[i] = [None] * 2

This generates a list containing 3 different lists of length two.  You
can also use a list comprehension:

   w, h = 2, 3
   A = [[None] * w for i in range(h)]

Or, you can use an extension that provides a matrix datatype; NumPy is
the best known.
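A quick check that the comprehension-based approach creates independent rows:

```python
w, h = 2, 3
A = [[None] * w for i in range(h)]  # three distinct row lists
A[0][0] = 5
# only the first row is modified; the other rows are unaffected
```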


How do I apply a method to a sequence of objects?
-------------------------------------------------

Use a list comprehension:

   result = [obj.method() for obj in mylist]


Why does a_tuple[i] += ['item'] raise an exception when the addition works?
---------------------------------------------------------------------------

This is because of a combination of the fact that augmented assignment
operators are *assignment* operators, and the difference between
mutable and immutable objects in Python.

This discussion applies in general when augmented assignment operators
are applied to elements of a tuple that point to mutable objects, but
we'll use a "list" and "+=" as our exemplar.

If you write:

   >>> a_tuple = (1, 2)
   >>> a_tuple[0] += 1
   Traceback (most recent call last):
      ...
   TypeError: 'tuple' object does not support item assignment

The reason for the exception should be immediately clear: "1" is added
to the object "a_tuple[0]" points to ("1"), producing the result
object, "2", but when we attempt to assign the result of the
computation, "2", to element "0" of the tuple, we get an error because
we can't change what an element of a tuple points to.

Under the covers, what this augmented assignment statement is doing is
approximately this:

   >>> result = a_tuple[0] + 1
   >>> a_tuple[0] = result
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

It is the assignment part of the operation that produces the error,
since a tuple is immutable.

When you write something like:

   >>> a_tuple = (['foo'], 'bar')
   >>> a_tuple[0] += ['item']
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

The exception is a bit more surprising, and even more surprising is
the fact that even though there was an error, the append worked:

   >>> a_tuple[0]
   ['foo', 'item']

To see why this happens, you need to know that (a) if an object
implements an "__iadd__" magic method, it gets called when the "+="
augmented assignment is executed, and its return value is what gets
used in the assignment statement; and (b) for lists, "__iadd__" is
equivalent to calling "extend" on the list and returning the list.
That's why we say that for lists, "+=" is a "shorthand" for
"list.extend":

   >>> a_list = []
   >>> a_list += [1]
   >>> a_list
   [1]

This is equivalent to:

   >>> result = a_list.__iadd__([1])
   >>> a_list = result

The object pointed to by a_list has been mutated, and the pointer to
the mutated object is assigned back to "a_list".  The end result of
the assignment is a no-op, since it is a pointer to the same object
that "a_list" was previously pointing to, but the assignment still
happens.

Thus, in our tuple example what is happening is equivalent to:

   >>> result = a_tuple[0].__iadd__(['item'])
   >>> a_tuple[0] = result
   Traceback (most recent call last):
     ...
   TypeError: 'tuple' object does not support item assignment

The "__iadd__" succeeds, and thus the list is extended, but even
though "result" points to the same object that "a_tuple[0]" already
points to, that final assignment still results in an error, because
tuples are immutable.
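If you do want to mutate a list held inside a tuple, avoid augmented assignment and call a mutating method such as "extend()" directly; since no assignment back into the tuple takes place, no exception is raised:

```python
a_tuple = (['foo'], 'bar')
# mutates the list in place; the tuple itself is never assigned to
a_tuple[0].extend(['item'])
```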


I want to do a complicated sort: can you do a Schwartzian Transform in Python?
------------------------------------------------------------------------------

The technique, attributed to Randal Schwartz of the Perl community,
sorts the elements of a list by a metric which maps each element to
its "sort value".  In Python, use the "key" argument for the
"list.sort()" method:

   Isorted = L[:]
   Isorted.sort(key=lambda s: int(s[10:15]))
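As a self-contained illustration (with made-up sample data), sorting strings by an embedded number rather than lexicographically:

```python
files = ["line-2", "line-10", "line-1"]
# lexicographic order would put "line-10" before "line-2"
files.sort(key=lambda s: int(s.split("-")[1]))
```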


How can I sort one list by values from another list?
----------------------------------------------------

Merge them into an iterator of tuples, sort the resulting list, and
then pick out the element you want.

   >>> list1 = ["what", "I'm", "sorting", "by"]
   >>> list2 = ["something", "else", "to", "sort"]
   >>> pairs = zip(list1, list2)
   >>> pairs = sorted(pairs)
   >>> pairs
   [("I'm", 'else'), ('by', 'sort'), ('sorting', 'to'), ('what', 'something')]
   >>> result = [x[1] for x in pairs]
   >>> result
   ['else', 'sort', 'to', 'something']

An alternative for the last step is:

   >>> result = []
   >>> for p in pairs: result.append(p[1])

If you find this more legible, you might prefer to use this instead of
the final list comprehension.  However, it is almost twice as slow for
long lists.  Why? First, the "append()" operation has to reallocate
memory, and while it uses some tricks to avoid doing that each time,
it still has to do it occasionally, and that costs quite a bit.
Second, the expression "result.append" requires an extra attribute
lookup, and third, there's a speed reduction from having to make all
those function calls.


Objects
=======


What is a class?
----------------

A class is the particular object type created by executing a class
statement.  Class objects are used as templates to create instance
objects, which embody both the data (attributes) and code (methods)
specific to a datatype.

A class can be based on one or more other classes, called its base
class(es).  It then inherits the attributes and methods of its base
classes.  This allows an object model to be successively refined by
inheritance.  You might have a generic "Mailbox" class that provides
basic accessor methods for a mailbox, and subclasses such as
"MboxMailbox", "MaildirMailbox", "OutlookMailbox" that handle various
specific mailbox formats.
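A minimal sketch of such a hierarchy (the class and method names below are illustrative, not a real library):

```python
class Mailbox:
    """Generic mailbox with basic accessor methods."""

    def __init__(self, messages):
        self.messages = messages

    def count(self):
        return len(self.messages)

class MboxMailbox(Mailbox):
    """Would add mbox-format-specific parsing; inherits the rest."""

box = MboxMailbox(["hello", "world"])
n = box.count()  # method inherited from the Mailbox base class
```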


What is a method?
-----------------

A method is a function on some object "x" that you normally call as
"x.name(arguments...)".  Methods are defined as functions inside the
class definition:

   class C:
       def meth(self, arg):
           return arg * 2 + self.attribute


What is self?
-------------

Self is merely a conventional name for the first argument of a
method.  A method defined as "meth(self, a, b, c)" should be called as
"x.meth(a, b, c)" for some instance "x" of the class in which the
definition occurs; the called method will think it is called as
"meth(x, a, b, c)".

See also Why must "self" be used explicitly in method definitions and
calls?


How do I check if an object is an instance of a given class or of a subclass of it?
-----------------------------------------------------------------------------------

Use the built-in function "isinstance(obj, cls)".  You can check if an
object is an instance of any of a number of classes by providing a
tuple instead of a single class, e.g. "isinstance(obj, (class1,
class2, ...))", and can also check whether an object is one of
Python's built-in types, e.g. "isinstance(obj, str)" or
"isinstance(obj, (int, float, complex))".

Note that most programs do not use "isinstance()" on user-defined
classes very often.  If you are developing the classes yourself, a
more proper object-oriented style is to define methods on the classes
that encapsulate a particular behaviour, instead of checking the
object's class and doing a different thing based on what class it is.
For example, if you have a function that does something:

   def search(obj):
       if isinstance(obj, Mailbox):
           ...  # code to search a mailbox
       elif isinstance(obj, Document):
           ...  # code to search a document
       elif ...

A better approach is to define a "search()" method on all the classes
and just call it:

   class Mailbox:
       def search(self):
           ...  # code to search a mailbox

   class Document:
       def search(self):
           ...  # code to search a document

   obj.search()


What is delegation?
-------------------

Delegation is an object oriented technique (also called a design
pattern). Let's say you have an object "x" and want to change the
behaviour of just one of its methods.  You can create a new class that
provides a new implementation of the method you're interested in
changing and delegates all other methods to the corresponding method
of "x".

Python programmers can easily implement delegation.  For example, the
following class implements a class that behaves like a file but
converts all written data to uppercase:

   class UpperOut:

       def __init__(self, outfile):
           self._outfile = outfile

       def write(self, s):
           self._outfile.write(s.upper())

       def __getattr__(self, name):
           return getattr(self._outfile, name)

Here the "UpperOut" class redefines the "write()" method to convert
the argument string to uppercase before calling the underlying
"self._outfile.write()" method.  All other methods are delegated to
the underlying "self._outfile" object.  The delegation is
accomplished via the "__getattr__" method; consult the language
reference for more information about controlling attribute access.
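As a usage sketch, wrapping an in-memory "io.StringIO" object shows both the overridden and the delegated behaviour:

```python
import io

class UpperOut:

    def __init__(self, outfile):
        self._outfile = outfile

    def write(self, s):
        self._outfile.write(s.upper())

    def __getattr__(self, name):
        return getattr(self._outfile, name)

buf = io.StringIO()
out = UpperOut(buf)
out.write("hello")          # overridden method: text is uppercased
contents = out.getvalue()   # getvalue() is delegated to the StringIO
```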

Note that for more general cases delegation can get trickier. When
attributes must be set as well as retrieved, the class must define a
"__setattr__()" method too, and it must do so carefully.  The basic
implementation of "__setattr__()" is roughly equivalent to the
following:

   class X:
       ...
       def __setattr__(self, name, value):
           self.__dict__[name] = value
       ...

Most "__setattr__()" implementations must modify "self.__dict__" to
store local state for self without causing an infinite recursion.


How do I call a method defined in a base class from a derived class that overrides it?
--------------------------------------------------------------------------------------

Use the built-in "super()" function:

   class Derived(Base):
       def meth(self):
           super().meth()

For versions prior to 3.0, you may have been using classic classes:
for a class definition such as "class Derived(Base): ..." you can call
method "meth()" defined in "Base" (or one of "Base"'s base classes) as
"Base.meth(self, arguments...)".  Here, "Base.meth" is an unbound
method, so you need to provide the "self" argument.
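A runnable sketch of the zero-argument "super()" form:

```python
class Base:
    def meth(self):
        return "Base.meth"

class Derived(Base):
    def meth(self):
        # extend, rather than fully replace, the base class behaviour
        return "Derived wraps " + super().meth()

result = Derived().meth()
```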


How can I organize my code to make it easier to change the base class?
----------------------------------------------------------------------

You could define an alias for the base class, assign the real base
class to it before your class definition, and use the alias throughout
your class.  Then all you have to change is the value assigned to the
alias.  Incidentally, this trick is also handy if you want to decide
dynamically (e.g. depending on availability of resources) which base
class to use.  Example:

   BaseAlias = <real base class>

   class Derived(BaseAlias):
       def meth(self):
           BaseAlias.meth(self)
           ...


How do I create static class data and static class methods?
-----------------------------------------------------------

Both static data and static methods (in the sense of C++ or Java) are
supported in Python.

For static data, simply define a class attribute.  To assign a new
value to the attribute, you have to explicitly use the class name in
the assignment:

   class C:
       count = 0   # number of times C.__init__ called

       def __init__(self):
           C.count = C.count + 1

       def getcount(self):
           return C.count  # or return self.count

"c.count" also refers to "C.count" for any "c" such that
"isinstance(c, C)" holds, unless overridden by "c" itself or by some
class on the base-class search path from "c.__class__" back to "C".

Caution: within a method of C, an assignment like "self.count = 42"
creates a new and unrelated instance named "count" in "self"'s own
dict.  Rebinding of a class-static data name must always specify the
class whether inside a method or not:

   C.count = 314

Static methods are possible:

   class C:
       @staticmethod
       def static(arg1, arg2, arg3):
           # No 'self' parameter!
           ...

However, a far more straightforward way to get the effect of a static
method is via a simple module-level function:

   def getcount():
       return C.count

If your code is structured so as to define one class (or tightly
related class hierarchy) per module, this supplies the desired
encapsulation.


How can I overload constructors (or methods) in Python?
-------------------------------------------------------

This answer actually applies to all methods, but the question usually
comes up first in the context of constructors.

In C++ you'd write

   class C {
       C() { cout << "No arguments\n"; }
       C(int i) { cout << "Argument is " << i << "\n"; }
   };

In Python you have to write a single constructor that catches all
cases using default arguments.  For example:

   class C:
       def __init__(self, i=None):
           if i is None:
               print("No arguments")
           else:
               print("Argument is", i)

This is not entirely equivalent, but close enough in practice.

You could also try a variable-length argument list, e.g.:

   def __init__(self, *args):
       ...

The same approach works for all method definitions.


I try to use __spam and I get an error about _SomeClassName__spam.
------------------------------------------------------------------

Variable names with double leading underscores are "mangled" to
provide a simple but effective way to define class private variables.
Any identifier of the form "__spam" (at least two leading underscores,
at most one trailing underscore) is textually replaced with
"_classname__spam", where "classname" is the current class name with
any leading underscores stripped.

This doesn't guarantee privacy: an outside user can still deliberately
access the "_classname__spam" attribute, and private values are
visible in the object's "__dict__".  Many Python programmers never
bother to use private variable names at all.
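A short demonstration of the mangling rule:

```python
class A:
    def __init__(self):
        self.__spam = 42  # stored under the mangled name _A__spam

a = A()
# the mangled name is still reachable from outside, so this is
# obfuscation rather than true privacy
value = a._A__spam
```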


My class defines __del__ but it is not called when I delete the object.
-----------------------------------------------------------------------

There are several possible reasons for this.

The del statement does not necessarily call "__del__()" -- it simply
decrements the object's reference count, and if this reaches zero
"__del__()" is called.

If your data structures contain circular links (e.g. a tree where each
child has a parent reference and each parent has a list of children)
the reference counts will never go back to zero.  Once in a while
Python runs an algorithm to detect such cycles, but the garbage
collector might run some time after the last reference to your data
structure vanishes, so your "__del__()" method may be called at an
inconvenient and random time. This is inconvenient if you're trying to
reproduce a problem. Worse, the order in which objects' "__del__()"
methods are executed is arbitrary.  You can run "gc.collect()" to
force a collection, but there *are* pathological cases where objects
will never be collected.

Despite the cycle collector, it's still a good idea to define an
explicit "close()" method on objects to be called whenever you're done
with them.  The "close()" method can then remove attributes that refer
to subobjects.  Don't call "__del__()" directly -- "__del__()" should
call "close()" and "close()" should make sure that it can be called
more than once for the same object.

Another way to avoid cyclical references is to use the "weakref"
module, which allows you to point to objects without incrementing
their reference count. Tree data structures, for instance, should use
weak references for their parent and sibling references (if they need
them!).
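A minimal sketch of a tree node that holds its parent through a weak reference, so that children do not keep a cyclic strong reference to their parent:

```python
import weakref

class Node:
    def __init__(self, parent=None):
        self.children = []
        # keep only a weak reference to the parent, avoiding a cycle
        self._parent = weakref.ref(parent) if parent is not None else None
        if parent is not None:
            parent.children.append(self)

    @property
    def parent(self):
        # dereference; returns None if the parent has been collected
        return self._parent() if self._parent is not None else None

root = Node()
child = Node(parent=root)
```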

Finally, if your "__del__()" method raises an exception, a warning
message is printed to "sys.stderr".


How do I get a list of all instances of a given class?
------------------------------------------------------

Python does not keep track of all instances of a class (or of a
built-in type).  You can program the class's constructor to keep track
of all instances by keeping a list of weak references to each
instance.
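A sketch of such a constructor, using a "weakref.WeakSet" so that the tracking itself does not keep instances alive:

```python
import weakref

class Tracked:
    _instances = weakref.WeakSet()

    def __init__(self):
        # weak references: tracked instances can still be collected
        Tracked._instances.add(self)

    @classmethod
    def all_instances(cls):
        return list(cls._instances)

a = Tracked()
b = Tracked()
count = len(Tracked.all_instances())
```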


Why does the result of "id()" appear to be not unique?
------------------------------------------------------

The "id()" builtin returns an integer that is guaranteed to be unique
during the lifetime of the object.  Since in CPython this is the
object's memory address, it happens frequently that after an object is
deleted from memory, the next freshly created object is allocated at
the same position in memory.  This is illustrated by this example:

   >>> id(1000) # doctest: +SKIP
   13901272
   >>> id(2000) # doctest: +SKIP
   13901272

The two ids belong to different integer objects that are created
before, and deleted immediately after execution of the "id()" call.
To be sure that objects whose id you want to examine are still alive,
create another reference to the object:

   >>> a = 1000; b = 2000
   >>> id(a) # doctest: +SKIP
   13901272
   >>> id(b) # doctest: +SKIP
   13891296


Modules
=======


How do I create a .pyc file?
----------------------------

When a module is imported for the first time (or when the source file
has changed since the current compiled file was created) a ".pyc" file
containing the compiled code should be created in a "__pycache__"
subdirectory of the directory containing the ".py" file.  The ".pyc"
file will have a filename that starts with the same name as the ".py"
file, and ends with ".pyc", with a middle component that depends on
the particular "python" binary that created it.  (See **PEP 3147** for
details.)

One reason that a ".pyc" file may not be created is a permissions
problem with the directory containing the source file, meaning that
the "__pycache__" subdirectory cannot be created. This can happen, for
example, if you develop as one user but run as another, such as if you
are testing with a web server.

Unless the "PYTHONDONTWRITEBYTECODE" environment variable is set,
creation of a .pyc file is automatic if you're importing a module and
Python has the ability (permissions, free space, etc...) to create a
"__pycache__" subdirectory and write the compiled module to that
subdirectory.

Running Python on a top level script is not considered an import and
no ".pyc" will be created.  For example, if you have a top-level
module "foo.py" that imports another module "xyz.py", when you run
"foo" (by typing "python foo.py" as a shell command), a ".pyc" will be
created for "xyz" because "xyz" is imported, but no ".pyc" file will
be created for "foo" since "foo.py" isn't being imported.

If you need to create a ".pyc" file for "foo" -- that is, to create a
".pyc" file for a module that is not imported -- you can, using the
"py_compile" and "compileall" modules.

The "py_compile" module can manually compile any module.  One way is
to use the "compile()" function in that module interactively:

   >>> import py_compile
   >>> py_compile.compile('foo.py')

This will write the ".pyc" to a "__pycache__" subdirectory in the same
location as "foo.py" (or you can override that with the optional
parameter "cfile").

You can also automatically compile all files in a directory or
directories using the "compileall" module.  You can do it from the
shell prompt by running "compileall.py" and providing the path of a
directory containing Python files to compile:

   python -m compileall .


How do I find the current module name?
--------------------------------------

A module can find out its own module name by looking at the predefined
global variable "__name__".  If this has the value "'__main__'", the
program is running as a script.  Many modules that are usually used by
importing them also provide a command-line interface or a self-test,
and only execute this code after checking "__name__":

   def main():
       print('Running test...')
       ...

   if __name__ == '__main__':
       main()


How can I have modules that mutually import each other?
-------------------------------------------------------

Suppose you have the following modules:

foo.py:

   from bar import bar_var
   foo_var = 1

bar.py:

   from foo import foo_var
   bar_var = 2

The problem is that the interpreter will perform the following steps:

* main imports "foo"

* Empty globals for "foo" are created

* "foo" is compiled and starts executing

* "foo" imports "bar"

* Empty globals for "bar" are created

* "bar" is compiled and starts executing

* "bar" imports "foo" (which is a no-op since there already is a
  module named "foo")

* "bar.foo_var = foo.foo_var"

The last step fails, because Python isn't done interpreting "foo" yet
and the global symbol dictionary for "foo" is still empty.

The same thing happens when you use "import foo", and then try to
access "foo.foo_var" in global code.

There are (at least) three possible workarounds for this problem.

Guido van Rossum recommends avoiding all uses of "from <module>
import ...", and placing all code inside functions.  Initializations
of global variables and class variables should use constants or
built-in functions only.  This means everything from an imported
module is referenced as "<module>.<name>".

Jim Roskind suggests performing steps in the following order in each
module:

* exports (globals, functions, and classes that don't need imported
  base classes)

* "import" statements

* active code (including globals that are initialized from imported
  values).

Van Rossum doesn't like this approach much because the imports appear
in a strange place, but it does work.

Matthias Urlichs recommends restructuring your code so that the
recursive import is not necessary in the first place.

These solutions are not mutually exclusive.
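For example, one way to apply the first suggestion is to defer the import into the function that needs it; by the time the function is actually called, both modules have been fully initialized (the module names here are the illustrative "foo"/"bar" from above):

```python
# foo.py
foo_var = 1

def get_bar_var():
    # deferred import: executed at call time, when bar is complete
    import bar
    return bar.bar_var
```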


__import__('x.y.z') returns <module 'x'>; how do I get z?
---------------------------------------------------------

Consider using the convenience function "import_module()" from
"importlib" instead:

   import importlib

   z = importlib.import_module('x.y.z')


When I edit an imported module and reimport it, the changes don't show up.  Why does this happen?
-------------------------------------------------------------------------------------------------

For reasons of efficiency as well as consistency, Python only reads
the module file the first time a module is imported.  If it didn't, in
a program consisting of many modules where each one imports the same
basic module, the basic module would be parsed and re-parsed many
times.  To force re-reading of a changed module, do this:

   import importlib
   import modname
   importlib.reload(modname)

Warning: this technique is not 100% fool-proof.  In particular,
modules containing statements like

   from modname import some_objects

will continue to work with the old versions of the imported objects.
If the module contains class definitions, existing class instances
will *not* be updated to use the new class definition.  This can
result in the following paradoxical behaviour:

   >>> import importlib
   >>> import cls
   >>> c = cls.C()                # Create an instance of C
   >>> importlib.reload(cls)
   <module 'cls' from 'cls.py'>
   >>> isinstance(c, cls.C)       # isinstance is false?!?
   False

The nature of the problem is made clear if you print out the
"identity" of the class objects:

   >>> hex(id(c.__class__))
   '0x7352a0'
   >>> hex(id(cls.C))
   '0x4198d0'
