Environment Variables

Passing secrets such as API keys through environment variables is a standard and recommended practice for container deployments. It removes the need to store secrets in config files and enables integration with cloud key vaults in enterprise systems such as Kubernetes and other managed container services.

The gecholog configuration files /app/conf/ginit_config.json, /app/conf/gl_config.json, /app/conf/gui_config.json, /app/conf/nats-server.conf, /app/conf/nats2log_config.json, /app/conf/nats2file_config.json and /app/conf/tokencounter_config.json support using environment variables.

How it works

Environment variables are added to your gecholog configuration in the following sequence:

  1. Use environment variable references ${YOUR_ENVIRONMENT_VARIABLE} in your config files
  2. Populate local environment variables (optional)
  3. Spin up the container using -e YOUR_ENVIRONMENT_VARIABLE=your_value or similar
  4. The gecholog services read the config files
  5. The gecholog services insert the environment variable values from the container service in memory; the config files still contain only references and no actual values

NOTE: Syntax in /app/conf/nats-server.conf is $NATS_TOKEN
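The substitution in steps 4 and 5 can be sketched as follows (a minimal Python illustration, not the actual gecholog implementation):

```python
import json
import os
import re

# Minimal sketch of ${VAR}-style substitution at config load time.
# The file on disk keeps the references; only the in-memory copy
# receives the actual values.
_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve(value):
    if isinstance(value, str):
        # Unset variables resolve to an empty string here, which a
        # later validation pass can flag as missing/required.
        return _REF.sub(lambda m: os.environ.get(m.group(1), ""), value)
    if isinstance(value, dict):
        return {k: resolve(v) for k, v in value.items()}
    if isinstance(value, list):
        return [resolve(v) for v in value]
    return value

os.environ["AISERVICE_API_BASE"] = "https://your.openai.azure.com/"
raw = json.loads('{"outbound": {"url": "${AISERVICE_API_BASE}"}}')
print(resolve(raw)["outbound"]["url"])  # https://your.openai.azure.com/
```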


Example for outbound LLM URL

In this example we show how the default gecholog settings use an environment variable to specify the outbound LLM URL.

Let's have a look at the default /service/standard/ router configuration in the /app/conf/gl_config.json file. The outbound URL for the router contains a reference to the environment variable AISERVICE_API_BASE:

{
    "path": "/service/standard/",
    "ingress": {
        "headers": {
            "Content-Type": [
                "application/json"
            ]
        }
    },
    "outbound": {
        "url": "${AISERVICE_API_BASE}",
        "endpoint": "",
        "headers": {
            "Content-Type": [
                "application/json"
            ]
        }
    }
}

We set the environment variable AISERVICE_API_BASE when we start the container like this:

docker run -d --name gecholog -p 5380:5380 -p 8080:8080 -e GUI_SECRET=changeme -e AISERVICE_API_BASE=https://your.openai.azure.com/ gecholog/gecholog:latest
az container create --resource-group <RESOURCE_GROUP> --name gecholog --image gecholog/gecholog:latest --environment-variables AISERVICE_API_BASE=https://your.openai.azure.com/ --secure-environment-variables GUI_SECRET=changeme --dns-name-label gecholog-changeme --ports 5380 8080


When we set AISERVICE_API_BASE=https://your.openai.azure.com/ our instance of the gecholog container will route traffic on the /service/standard/ router to https://your.openai.azure.com/.

But, what happens if we don't set the environment variable AISERVICE_API_BASE when we start the container?

docker run -d --name gecholog -p 5380:5380 -p 8080:8080 -e GUI_SECRET=changeme gecholog/gecholog:latest
az container create --resource-group <RESOURCE_GROUP> --name gecholog --image gecholog/gecholog:latest --secure-environment-variables GUI_SECRET=changeme --dns-name-label gecholog-changeme --ports 5380 8080


If we inspect the container logs for the gl service and filter with jq, we can see what happens:

docker logs gecholog | jq --slurp '.[] | select(.service=="gl")'
az container logs --resource-group <RESOURCE_GROUP> --name gecholog --container-name gecholog | jq --slurp '.[] | select(.service=="gl")'


Example of output

{
  "time": "2024-02-20T13:14:51.53630666Z",
  "level": "WARN",
  "source": {
    "function": "main.setupConfig",
    "file": "/app/cmd/gl/main.go",
    "line": 2402
  },
  "msg": "configuration has rejected fields",
  "service": "gl",
  "rejected_fields": {
    "gl_config.Routers[1].Router.Outbound.Url": "required:",
    "gl_config.Routers[2].Router.Outbound.Url": "required:",
    "gl_config.Routers[3].Router.Ingress.Headers[Api-Key][0]": "required:",
    "gl_config.Routers[3].Router.Outbound.Headers[Api-Key][0]": "required:",
    "gl_config.Routers[3].Router.Outbound.Url": "required:"
  }
}

We can see that routers [1], [2] and [3] are rejected due to a missing Outbound.Url. The default gecholog configuration uses AISERVICE_API_BASE to populate Outbound.Url for these routers, so if we don't set that environment variable the dependent routers are disabled.
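The rejection behavior can be approximated with a short sketch (hypothetical logic for illustration; gecholog's actual validation differs):

```python
import os
import re

_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolved_url(template):
    """Substitute env vars; an unset variable yields an empty string."""
    return _REF.sub(lambda m: os.environ.get(m.group(1), ""), template)

def router_active(router):
    # A router whose Outbound.Url resolves to empty fails the
    # "required" check and is disabled, as seen in the log above.
    return bool(resolved_url(router["outbound"]["url"]))

routers = [
    {"path": "/echo/", "outbound": {"url": "https://localhost"}},
    {"path": "/service/standard/", "outbound": {"url": "${AISERVICE_API_BASE}"}},
]
os.environ.pop("AISERVICE_API_BASE", None)  # variable not set
active = [r["path"] for r in routers if router_active(r)]
print(active)  # ['/echo/']
```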

If we inspect the log further, we can see that the only router that gets activated is the /echo/ router.

{
  "time": "2024-02-20T13:14:51.558818785Z",
  "level": "INFO",
  "source": {
    "function": "main.do",
    "file": "/app/cmd/gl/main.go",
    "line": 1793
  },
  "msg": "adding router",
  "service": "gl",
  "router": {
    "path": "/echo/",
    "ingress": "headers:map[]",
    "egress": "url:https://localhost endpoint: headers:map[Content-Type:[application/json]]"
  }
}

We can easily remove the dependency on environment variables in the configuration files like this:

{
    "path": "/service/standard/",
    "ingress": {
        "headers": {
            "Content-Type": [
                "application/json"
            ]
        }
    },
    "outbound": {
        "url": "https://your.openai.azure.com/",
        "endpoint": "",
        "headers": {
            "Content-Type": [
                "application/json"
            ]
        }
    }
}

Example for Api-Keys

Let's review how we can use gecholog to distribute custom API keys to users without sharing the real API key for the LLM service.

In this scenario you want to:

  1. Create a new custom API key for the users; let's set the environment variable GECHOLOG_API_KEY=v3ry4dv4nc30k3y
  2. Use your real API key for the LLM service without sharing it with the users; let's assume it is AISERVICE_API_KEY=l0ng4dv4nc304p1k3y

This is how you can configure the /restricted/ router:

{
    "path": "/restricted/",
    "ingress": {
        "headers": {
            "Content-Type": [
                "application/json"
            ],
            "Api-Key": [
                "${GECHOLOG_API_KEY}"
            ]
        }
    },
    "outbound": {
        "url": "${AISERVICE_API_BASE}",
        "endpoint": "",
        "headers": {
            "Content-Type": [
                "application/json"
            ],
            "Api-Key": [
                "${AISERVICE_API_KEY}"
            ]
        }
    }
}

In this configuration, requests to the /restricted/ router require the Api-Key header to equal the value of our environment variable GECHOLOG_API_KEY. If a valid API key is provided, gecholog forwards the request but replaces the Api-Key header with the value of AISERVICE_API_KEY. The /app/conf/gl_config.json file does not contain any API keys in clear text, only references to the environment variables.
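The key-swapping behavior can be sketched like this (an illustrative Python model of the router logic, not actual gecholog code):

```python
import os

# Hypothetical model of the /restricted/ router: validate the caller's
# Api-Key against GECHOLOG_API_KEY, then forward the request upstream
# with AISERVICE_API_KEY swapped in.
os.environ["GECHOLOG_API_KEY"] = "v3ry4dv4nc30k3y"
os.environ["AISERVICE_API_KEY"] = "l0ng4dv4nc304p1k3y"

def forward(headers):
    if headers.get("Api-Key") != os.environ["GECHOLOG_API_KEY"]:
        return 401, None  # reject: caller's key does not match
    outbound = dict(headers)
    outbound["Api-Key"] = os.environ["AISERVICE_API_KEY"]  # real key
    return 200, outbound

status, out = forward({"Api-Key": "v3ry4dv4nc30k3y",
                       "Content-Type": "application/json"})
print(status, out["Api-Key"])  # 200 l0ng4dv4nc304p1k3y
```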

Start the container

docker run -d --name gecholog -p 5380:5380 -p 8080:8080 -e GUI_SECRET=changeme -e AISERVICE_API_BASE=https://your.openai.azure.com/ -e GECHOLOG_API_KEY=v3ry4dv4nc30k3y -e AISERVICE_API_KEY=l0ng4dv4nc304p1k3y gecholog/gecholog:latest
az container create --resource-group <RESOURCE_GROUP> --name gecholog --image gecholog/gecholog:latest --environment-variables AISERVICE_API_BASE=https://your.openai.azure.com/ --secure-environment-variables GUI_SECRET=changeme GECHOLOG_API_KEY=v3ry4dv4nc30k3y AISERVICE_API_KEY=l0ng4dv4nc304p1k3y --dns-name-label gecholog-changeme --ports 5380 8080


Now you would use the GECHOLOG_API_KEY to authenticate when making a request.

Windows

setx GECHOLOG_API_KEY "v3ry4dv4nc30k3y"
setx DEPLOYMENT "your_azure_deployment"

macOS/Linux

export GECHOLOG_API_KEY=v3ry4dv4nc30k3y
export DEPLOYMENT=your_azure_deployment


Make the request

curl -X POST ^
     -H "api-key: %GECHOLOG_API_KEY%" ^
     -H "Content-Type: application/json" ^
     -d "{\"messages\": [{\"role\": \"system\",\"content\": \"Assistant is a large language model trained by OpenAI.\"},{\"role\": \"user\",\"content\": \"Who are the founders of Microsoft?\"}],\"max_tokens\": 15}" ^
     "http://localhost:5380/restricted/openai/deployments/%DEPLOYMENT%/chat/completions?api-version=2023-12-01-preview"
curl -X POST -H "api-key: $GECHOLOG_API_KEY" -H "Content-Type: application/json" -d '{
    "messages": [
      {
        "role": "system",
        "content": "Assistant is a large language model trained by OpenAI."
      },
      {
        "role": "user",
        "content": "Who are the founders of Microsoft?"
      }
    ],
    "max_tokens": 15
  }' "http://localhost:5380/restricted/openai/deployments/$DEPLOYMENT/chat/completions?api-version=2023-12-01-preview"


And receive a response like this

{
  "id": "chatcmpl-8GQ88W04Yx6Z6kCYLbnf2NzTXm2WO",
  "object": "chat.completion",
  "created": 1698924384,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "\"Bill Gates and Paul Allen.\""
      }
    }
  ],
  "usage": {
    "prompt_tokens": 37,
    "completion_tokens": 7,
    "total_tokens": 44
  }
}

Using local environment variables

The example above illustrates the mechanics; a more robust approach is to use local environment variables when provisioning the gecholog container.

Windows

setx GECHOLOG_API_KEY "v3ry4dv4nc30k3y"
setx AISERVICE_API_KEY "l0ng4dv4nc304p1k3y"
setx GUI_SECRET "changeme"

macOS/Linux

export GECHOLOG_API_KEY=v3ry4dv4nc30k3y
export AISERVICE_API_KEY=l0ng4dv4nc304p1k3y
export GUI_SECRET=changeme


Windows

docker run -d --name gecholog -p 5380:5380 -p 8080:8080 -e GUI_SECRET=%GUI_SECRET% -e AISERVICE_API_BASE=https://your.openai.azure.com/ -e GECHOLOG_API_KEY=%GECHOLOG_API_KEY% -e AISERVICE_API_KEY=%AISERVICE_API_KEY% gecholog/gecholog:latest
az container create --resource-group <RESOURCE_GROUP> --name gecholog --image gecholog/gecholog:latest --environment-variables AISERVICE_API_BASE=https://your.openai.azure.com/ --secure-environment-variables GUI_SECRET=%GUI_SECRET% GECHOLOG_API_KEY=%GECHOLOG_API_KEY% AISERVICE_API_KEY=%AISERVICE_API_KEY% --dns-name-label gecholog-changeme --ports 5380 8080


macOS/Linux

docker run -d --name gecholog -p 5380:5380 -p 8080:8080 -e GUI_SECRET=$GUI_SECRET -e AISERVICE_API_BASE=https://your.openai.azure.com/ -e GECHOLOG_API_KEY=$GECHOLOG_API_KEY -e AISERVICE_API_KEY=$AISERVICE_API_KEY gecholog/gecholog:latest
az container create --resource-group <RESOURCE_GROUP> --name gecholog --image gecholog/gecholog:latest --environment-variables AISERVICE_API_BASE=https://your.openai.azure.com/ --secure-environment-variables GUI_SECRET=$GUI_SECRET GECHOLOG_API_KEY=$GECHOLOG_API_KEY AISERVICE_API_KEY=$AISERVICE_API_KEY --dns-name-label gecholog-changeme --ports 5380 8080


Web interface and environment variables

The web interface runs validation in the same context, with the same environment variables, as the rest of the container. This means that if an environment variable is set, the field where it is used is validated against the value of that variable. Example:

[Image: Gecholog.ai Router Configuration and Validation]

We can see that AISERVICE_API_BASE is set and that its value is valid for the Outbound.Url field. However, neither GECHOLOG_API_KEY nor AISERVICE_API_KEY is set, so the validation process marks them as missing/required.

NATS_TOKEN and GUI_SECRET

The environment variables NATS_TOKEN and GUI_SECRET protect access to the NATS service bus and the web interface. The gecholog container startup process checks whether these two environment variables are set and, if not, generates a random NATS_TOKEN and a random GUI_SECRET. These randomized values are not accessible, however, so connecting external processors or logging in to the web interface is not possible under this setup. For any scenario where you want to connect a custom processor, always set the environment variable NATS_TOKEN to your own secure secret. For any scenario where you want to log in to the web interface, set the environment variable GUI_SECRET.
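The fallback behavior can be sketched as follows (illustrative only; the actual startup logic is internal to the container):

```python
import os
import secrets

# Sketch of the described startup fallback: use the provided secret
# if the environment variable is set, otherwise generate a random one
# that is never exposed outside the container.
def secret_or_random(name):
    value = os.environ.get(name)
    return value if value else secrets.token_urlsafe(32)

os.environ["NATS_TOKEN"] = "my-own-secure-token"
os.environ.pop("GUI_SECRET", None)  # not set

nats_token = secret_or_random("NATS_TOKEN")  # reuses the provided value
gui_secret = secret_or_random("GUI_SECRET")  # random; web login impossible
print(nats_token == "my-own-secure-token", len(gui_secret) > 0)
```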