Python
Python can be viewed either as an interpreted programming language with a standard runtime and libraries, or as a programmable integration platform powering science, machine learning, and generative AI.
For many, machine learning and generative AI development is synonymous with Python development.
Most development environments for machine learning and generative AI include Python, even when targeting other platforms such as .NET or native code.
Since standard Python is, for now, an interpreted language, it doesn't perform as well as compiled languages. This is a well-known problem, and multiple projects have developed high-performance solutions targeting native code generation and/or compute kernels running on GPUs or NPUs.
Python code can be compiled into native code using Cython, a static optimizing compiler for the Python language, which may increase runtime performance significantly. With a little effort, programs compiled with Cython can perform as well as software written in other natively compiled languages.
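As a rough sketch, the hypothetical function below is the kind of numeric hot spot Cython speeds up well: it runs unchanged as plain Python, and in a Cython .pyx file one would add C type declarations (for example "cdef double s" and "cdef int i") so the loop compiles down to plain C arithmetic.

```python
# Pure-Python version of a numeric hot spot (a hypothetical example).
# In a Cython .pyx file, the loop variables would get C type
# declarations so the loop body compiles to native arithmetic.
def harmonic_sum(n):
    # Sum of 1/1 + 1/2 + ... + 1/n; tight loops like this are where
    # Cython's native-code compilation pays off most.
    s = 0.0
    for i in range(1, n + 1):
        s += 1.0 / i
    return s
```

Compiling such a module is typically a matter of running cythonize on the .pyx file and importing the resulting extension module exactly like a pure-Python one.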
CPython added an experimental JIT compiler in version 3.13, which generates machine code from the interpreter's tier 2 intermediate representation (IR). It is off by default and must be enabled when building CPython, via the --enable-experimental-jit configure option.
There are also production-ready JIT compilers such as Numba, PyTorch JIT, and Triton, which generate native machine code or run AI and other workloads directly on the GPU.
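A minimal sketch of the Numba style of JIT compilation: decorating a numeric function with @njit compiles it to native machine code on first call. The example assumes the numba package is installed and falls back to plain Python if it is not, so the function itself is unchanged either way.

```python
# JIT-compiling a numeric kernel with Numba (assumes the numba
# package is installed; falls back to plain Python otherwise).
try:
    from numba import njit
except ImportError:
    def njit(func):  # no-op fallback so the example still runs
        return func

@njit
def dot(xs, ys):
    # Simple dot product; with Numba, this is compiled to native
    # machine code the first time it is called for a given type.
    total = 0.0
    for a, b in zip(xs, ys):
        total += a * b
    return total
```

The first call pays a one-time compilation cost; subsequent calls with the same argument types run at native speed.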
Creating the glue between Python and natively compiled code can be tedious, but fortunately Cython can also simplify this process and make it less error prone.
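To see the tedium that tools like Cython automate, here is the manual approach using the standard library's ctypes module: loading a shared library and declaring each function's C signature by hand. This sketch assumes a Unix-like system where the C math library can be located by name.

```python
import ctypes
import ctypes.util

# Manual FFI glue: locate and load the C math library (assumes a
# Unix-like platform; "libm.so.6" is a glibc-specific fallback).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature of sqrt by hand: double sqrt(double).
# Getting restype/argtypes wrong silently corrupts values, which is
# exactly the error-prone boilerplate Cython generates for you.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

result = libm.sqrt(9.0)
```

Every wrapped function needs this kind of declaration, which is why generated bindings scale better than hand-written ones.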
Another popular option is the Boost.Python C++ library.
For Rust interoperability there is PyO3, which can be used to write native Python extension modules in Rust or to embed Python in a Rust binary.
There is an extraordinary number of Python packages that can be used to perform most development tasks.