Using Python

Best Practice: Use a Python Virtual Environment

To avoid dependency conflicts and keep your environment clean, create and activate a Python virtual environment before installing any packages:

python3 -m venv venv
source venv/bin/activate

Install Dependencies

pip install llama-cpp-python "pymilvus[model]"

Install Alith

python3 -m pip install alith -U

Set Environment Variables

For OpenAI/ChatGPT API:

export PRIVATE_KEY=<your wallet private key>
export OPENAI_API_KEY=<your openai api key>

For other OpenAI-compatible APIs (DeepSeek, Gemini, etc.):

export PRIVATE_KEY=<your wallet private key>
export LLM_API_KEY=<your api key>
export LLM_BASE_URL=<your api base url>
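In Python, these variables can be read back with `os.environ`. The helper below is a minimal sketch (the function name and fallback order are illustrative, not part of Alith):

```python
import os

def load_llm_config(env=None):
    """Resolve API credentials from the environment variables exported above."""
    env = os.environ if env is None else env
    # Prefer OPENAI_API_KEY; fall back to LLM_API_KEY for other providers.
    key = env.get("OPENAI_API_KEY") or env.get("LLM_API_KEY")
    base = env.get("LLM_BASE_URL")  # None means the default OpenAI endpoint
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY or LLM_API_KEY first")
    return {"api_key": key, "base_url": base}

# Illustration with a fake environment:
cfg = load_llm_config({"LLM_API_KEY": "sk-demo", "LLM_BASE_URL": "https://api.deepseek.com"})
```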

Step 1: Run the Inference Server

Note: The public address of the private key you expose to the inference server is the LAZAI_IDAO_ADDRESS. Once the inference server is running, the URL must be registered using the add_inference_node function in Alith. This can only be done by LazAI admins.
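Conceptually, registration pairs a node's wallet address (its LAZAI_IDAO_ADDRESS) with its inference URL. The sketch below is an illustrative in-memory model of that bookkeeping only; the real `add_inference_node` call lives in Alith, runs on LazAI, is admin-only, and its actual signature is not reproduced here:

```python
# Illustrative model only -- not the Alith/LazAI implementation.
class InferenceNodeRegistry:
    def __init__(self):
        self._nodes = {}  # LAZAI_IDAO_ADDRESS -> inference server URL

    def add_inference_node(self, idao_address, url):
        """Pair a node's wallet address with its inference URL."""
        self._nodes[idao_address] = url

    def resolve(self, idao_address):
        """Look up the URL a client should send inference requests to."""
        return self._nodes.get(idao_address)

registry = InferenceNodeRegistry()
registry.add_inference_node("0xYourIdaoAddress", "https://node.example.com")
```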

Local Development

For OpenAI/ChatGPT API:

For other OpenAI-compatible APIs (DeepSeek, Gemini, etc.):

Production Deployment on Phala TEE Cloud

For production-ready applications, deploy your inference server on Phala TEE Cloud for enhanced security and privacy. Once deployed, you will receive an inference URL that must be registered by LazAI admins using the add_inference_node function.

Alternatively, you can use existing inference nodes that are already registered.

Step 2: Request Inference via LazAI Client


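As a hedged sketch of this step, assuming the inference node exposes an OpenAI-compatible chat-completions endpoint and that settlement metadata travels in request headers (the header name and the `file_id` body field below are illustrative assumptions, not the real LazAI protocol):

```python
import json
import urllib.request

def build_inference_request(node_url, file_id, settlement_headers, prompt):
    """Assemble a chat-completion request against a registered inference node."""
    body = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        # file_id links this request to the specific data you contributed
        "file_id": file_id,
    }).encode()
    return urllib.request.Request(
        f"{node_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json", **settlement_headers},
        method="POST",
    )

req = build_inference_request(
    "https://node.example.com", 42,
    {"X-LazAI-Settlement": "signed-by-wallet"},  # assumed header name
    "Summarize my contributed data",
)
# urllib.request.urlopen(req) would send it; omitted here.
```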
Security & Privacy

  • Your data never leaves your control. Inference is performed in a privacy-preserving environment, using cryptographic settlement and secure computation.

  • Settlement headers ensure only authorized users and nodes can access your data for inference.

  • File ID links your inference request to the specific data you contributed, maintaining a verifiable chain of custody.
