I have noticed that the same Python code runs faster when it is wrapped in a function than when it is run at the top level of a script. Why does Python code run faster inside a function?
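To make the question concrete, here is a minimal benchmark sketch. It simulates module-level execution by running a precompiled snippet with exec and a plain globals dict (which forces dict-based name lookups), and compares that to the same loop inside a function. The exact numbers are machine-dependent; the snippet and function names are just for this illustration.

```python
import timeit

# The same loop, precompiled so exec's timing is not dominated by
# compilation. Run via exec(code, {}) it uses dict-based name lookups,
# like code at module level.
code = compile(
    "total = 0\n"
    "for i in range(100000):\n"
    "    total += i\n",
    "<module-level>",
    "exec",
)

def in_function():
    # Same loop, but total and i are fast local variables here.
    total = 0
    for i in range(100000):
        total += i

t_module = timeit.timeit(lambda: exec(code, {}), number=50)
t_func = timeit.timeit(in_function, number=50)
print(f"module-level style: {t_module:.4f}s, inside a function: {t_func:.4f}s")
```

On CPython the function version is typically noticeably faster, for the reasons below.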
Inside a Python function, it is generally faster to access local variables than global variables, and that accounts for most of the difference. The reason is explained below. Aside from local/global variable access times, opcode prediction in CPython's evaluation loop also makes code in a function faster.
CPython is the original Python implementation, the one you download from python.org. It is called CPython to distinguish it from later Python implementations, and to distinguish the implementation of the language engine from the Python programming language itself. CPython happens to be implemented in C. It compiles your Python code into bytecode and interprets that bytecode in an evaluation loop.
When a function is compiled, its local variables are stored in a fixed-size array (not a dict), and variable names are assigned indexes into that array. This is possible because you can't dynamically add local variables to a function. Retrieving a local variable is then literally a pointer lookup into the array and a refcount increment on the PyObject, which is trivially cheap.
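You can see both mechanisms with the dis module. In this sketch (names are arbitrary), y is in the function's fixed-size locals array and is accessed with LOAD_FAST, while x lives in the module's globals dict and is accessed with LOAD_GLOBAL:

```python
import dis

x = 10  # module-level: looked up in the globals dict

def f():
    y = 20          # local: lives in the function's fixed-size array
    return x + y    # x -> LOAD_GLOBAL, y -> LOAD_FAST

# The compiled code object records the local names up front:
print(f.__code__.co_varnames)   # ('y',)

dis.dis(f)  # disassembly shows LOAD_GLOBAL for x and LOAD_FAST for y

opnames = {i.opname for i in dis.get_instructions(f)}
```

Exact opcode listings vary between CPython versions, but the LOAD_FAST/LOAD_GLOBAL split is stable.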
Compare this to a global lookup (LOAD_GLOBAL), which is a true dict search involving hashing and so on. Incidentally, this is why you need the global statement if you want to assign to a global variable: if you ever assign to a name inside a function, the compiler treats that name as local and emits STORE_FAST (and LOAD_FAST) for its accesses unless you tell it not to.
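A small sketch of that compiler decision (function names invented for illustration): a plain assignment creates a local and compiles to STORE_FAST, while a global declaration switches the same assignment to STORE_GLOBAL:

```python
import dis

counter = 0

def shadow():
    counter = 1        # assignment makes 'counter' local: STORE_FAST

def rebind():
    global counter     # opt out: the compiler emits STORE_GLOBAL instead
    counter = 1

ops_shadow = {i.opname for i in dis.get_instructions(shadow)}
ops_rebind = {i.opname for i in dis.get_instructions(rebind)}
print(ops_shadow)
print(ops_rebind)
```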
By the way, global lookups are still pretty optimised. Attribute lookups are the really slow ones!
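A rough timing sketch of that last point (the Config class and names here are made up for the benchmark): reading an attribute costs a global lookup for the object plus the attribute lookup itself, so it does strictly more work per access than reading a plain global.

```python
import timeit

class Config:
    # Hypothetical class, used only for this benchmark.
    value = 1

cfg = Config()
g = 1

def read_global(n=100_000):
    s = 0
    for _ in range(n):
        s += g           # one global (dict) lookup per iteration
    return s

def read_attribute(n=100_000):
    s = 0
    for _ in range(n):
        s += cfg.value   # global lookup for cfg PLUS an attribute lookup
    return s

t_global = timeit.timeit(read_global, number=20)
t_attr = timeit.timeit(read_attribute, number=20)
print(f"global: {t_global:.4f}s, attribute: {t_attr:.4f}s")
```

Recent CPython versions cache both kinds of lookup, so the gap varies by version, but the attribute version does not get cheaper than the plain global.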