
Template

template

MODULE DESCRIPTION
main

Main module of the application.

src

Source code of the microservicetemplate containing core components and utilities.

main

Main module of the application.

This module serves as the entry point for the program. It imports necessary modules, sets up any initial configuration or data structures, and possibly defines main functions or classes that are used throughout the application.

src

Source code of the microservicetemplate containing core components and utilities.

MODULE DESCRIPTION
app

Initialize the app.

endpoints

Define all endpoints of the FastAPI app.

models

Data model classes for loading and validating API and configuration parameters.

service_logic

Central service logic. Please rename me!

settings

Load all settings from a central place, not hidden in utils.

utils

Utility functions for logging, LLM availability checks, and configuration processing.

app

Initialize the app.

FUNCTION DESCRIPTION
lifespan

Sets up a scheduler and updates available LLMs.

lifespan async
lifespan(_app)

Sets up a scheduler and updates available LLMs.

This lifespan function is started on startup of FastAPI. The first part, up to yield, is executed on startup and initializes a scheduler that regularly checks the LLM API. The second part is executed on shutdown and cleans up the scheduler.

The available LLMs, i.e. the LLMs whose API checks passed, are cached in the FastAPI state object as app.state.available_llms.

PARAMETER DESCRIPTION
_app

fastapi.applications.FastAPI object

TYPE: FastAPI

Source code in docs/microservices/template/src/app.py
@asynccontextmanager
async def lifespan(_app: FastAPI) -> AsyncGenerator[None, None]:
    """Sets up a scheduler and updates available llms.

    This lifespan function is started on startup of FastAPI. The first part
    - till `yield` is executed on startup and initializes a scheduler to regulary
    check the LLM-API. The second part is executed on shutdown and is used to
    clean up the scheduler.

    The available LLMs - i.e. the LLMs where API-checks passed - are cached in
    FastAPI state object as `app.state.available_llms`.

    Args:
        _app (FastAPI): fastapi.applications.FastAPI object
    """

    async def update_llm_state() -> None:
        """Store available LLMs in FastAPI app state."""
        _app.state.available_llms = await get_available_llms()

    await update_llm_state()

    # setup a scheduler
    scheduler = AsyncIOScheduler()
    scheduler.add_job(
        update_llm_state,
        "interval",
        seconds=settings.check_llm_api_interval_in_s,
    )
    scheduler.start()

    yield

    # cleanup
    scheduler.shutdown()
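
The lifespan context manager above only takes effect once it is registered on the application object. A minimal sketch of that wiring, assumed here since the template's actual app construction is not shown on this page:

from fastapi import FastAPI

# Hypothetical wiring: register the lifespan so the scheduler starts with the app.
app = FastAPI(lifespan=lifespan)
# After startup the cached list is available as app.state.available_llms
# (see get_llms in endpoints.py for how it is read from a request).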

endpoints

Define all endpoints of the FastAPI app.

FUNCTION DESCRIPTION
example

Example Endpoint.

get_llms

Return model information of available LLMs.

health

Return a health check message.

example async
example(params)

Example Endpoint.

PARAMETER DESCRIPTION
params

input parameter defined as pydantic class

TYPE: ExampleInput

RETURNS DESCRIPTION
ExampleOutput

Simple output as defined in pydantic output_models.

Source code in docs/microservices/template/src/endpoints.py
@router.post(
    "/example_endpoint",
    response_model=ExampleOutput,
    summary="Example Endpoint used to return something",
    description=(
        "Performs example response.\n\n"
        "The endpoint returns a single list containing the prompt map.\n\n"
    ),
    openapi_extra={
        "requestBody": {
            "content": {
                "application/json": {
                    "examples": ExampleInput.model_config["json_schema_extra"][
                        "openapi_examples"
                    ],
                }
            },
        }
    },
    responses={
        200: {
            "description": "Successful response",
            "content": {
                "application/json": {
                    "examples": ExampleOutput.model_config["json_schema_extra"][
                        "openapi_examples"
                    ],
                },
            },
        },
        400: {"description": "Invalid language model."},
    },
)
async def example(params: ExampleInput) -> ExampleOutput:
    """Example Endpoint.

    Args:
        params (ExampleInput): input parameter defined as pydantic class

    Returns:
        Simple output as defined in pydantic output_models.
    """
    logger.debug("This is a logging message.")

    return do_stuff(params)
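
For illustration, a client-side call to this endpoint could look as follows; the base URL and port are assumptions, and the payload mirrors the ExampleInput example:

import httpx

# Hypothetical base URL; adjust to the deployed service.
payload = {"model_name": "test_model_mock", "text": "This is a text example."}
response = httpx.post("http://localhost:8000/example_endpoint", json=payload)
response.raise_for_status()
print(response.json())  # ExampleOutput as JSON, i.e. prompt_map_in_cfg and metadata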
get_llms async
get_llms(request)

Return model information of available LLMs.

PARAMETER DESCRIPTION
request

Request data.

TYPE: Request

RETURNS DESCRIPTION
list[dict]

List of model information.

Source code in docs/microservices/template/src/endpoints.py
@router.get(
    "/llms",
    summary="List available language models.",
    description=("Returns a list of available language models (LLMs).\n\n"),
    responses={
        200: {
            "description": "List of available LLMs.",
            "content": {
                "application/json": {
                    "example": [
                        {
                            "label": "test_model:mock",
                            "is_remote": False,
                            "name": "test_model_mock",
                        },
                    ]
                }
            },
        },
        500: {"description": "Internal server error accessing microservice"},
    },
)
async def get_llms(request: Request) -> list[dict]:
    """Return model information of available LLMs.

    Args:
        request (Request): Request data.

    Returns:
        List of model information.
    """
    app = request.app  # indirectly access the FastAPI app object
    return app.state.available_llms
health async
health()

Return a health check message.

RETURNS DESCRIPTION
dict[str, str]

The health check message as a dictionary.

Source code in docs/microservices/template/src/endpoints.py
@router.get(
    "/",
    summary="Health check endpoint",
    description=(
        "Returns a simple message indicating that the Microservicetemplate service is running.\n\n"
        "Use this endpoint to verify that the service is alive and responsive."
    ),
    responses={
        200: {
            "description": "Health check successful",
            "content": {
                "application/json": {
                    "example": {"status": "Microservicetemplate is running"}
                }
            },
        },
        500: {"description": "Internal server error"},
    },
)
@router.get(
    "/health",
    summary="Health check endpoint",
    description=(
        "Returns a simple message indicating that the Microservicetemplate service is running.\n\n"
        "Use this endpoint to verify that the service is alive and responsive."
    ),
    responses={
        200: {
            "description": "Health check successful",
            "content": {
                "application/json": {
                    "example": {"status": "Microservicetemplate is running"}
                }
            },
        },
        500: {"description": "Internal server error"},
    },
)
async def health() -> dict[str, str]:
    """Return a health check message.

    Returns:
        The health check message as a dictionary.
    """
    return {"message": f"{settings.service_name} is running"}

models

Data model classes for loading and validating API and configuration parameters.

MODULE DESCRIPTION
api_input

Pydantic models for API input parameters.

api_output

Pydantic models for API output parameters.

general

Load and check Settings from yml.

llms

Pydantic model for the LLM config.

api_input

Pydantic models for API input parameters.

CLASS DESCRIPTION
ExampleInput

Example input model that is used for something.

ExampleInput

Bases: BaseModel

Example input model that is used for something.

ATTRIBUTE DESCRIPTION
model_name

Model to use.

TYPE: str

text

Text to parse.

TYPE: str

Source code in docs/microservices/template/src/models/api_input.py
class ExampleInput(BaseModel):
    """Example input model that is used for something.

    Attributes:
        model_name (str): Model to use.
        text (str): Text to parse.
    """

    model_name: str
    text: str

    model_config = ConfigDict(
        json_schema_extra={
            "openapi_examples": {
                "standard": {
                    "summary": "Simple example input",
                    "description": "Simple example input with model name and text.",
                    "value": {
                        "model_name": "test_model_mock",
                        "request_timestamp": "This is a text example.",
                    },
                }
            }
        }
    )
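
A minimal sketch of validating request data with this model; the values are illustrative only:

# Illustrative values only.
params = ExampleInput(model_name="test_model_mock", text="Simplify this sentence.")
print(params.model_dump())  # {'model_name': 'test_model_mock', 'text': 'Simplify this sentence.'}

# Missing or wrongly typed fields raise a pydantic ValidationError, which FastAPI
# turns into a 422 response for the endpoint above.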
api_output

Pydantic models for API output parameters.

CLASS DESCRIPTION
ExampleOutput

Example output model that is used for something.

ExampleOutput

Bases: BaseModel

Example output model that is used for something.

ATTRIBUTE DESCRIPTION
prompt_map_in_cfg

Prompt_map to use from config.

TYPE: str

metadata

Data to pass on.

TYPE: AnyHttpUrl

Source code in docs/microservices/template/src/models/api_output.py
class ExampleOutput(BaseModel):
    """Example input model that is used for something.

    Attributes:
        prompt_map_in_cfg (str): Prompt_map to use from config.
        metadata (AnyHttpUrl): Data to pass on.
    """

    prompt_map_in_cfg: str
    metadata: AnyHttpUrl

    model_config = ConfigDict(
        json_schema_extra={
            "openapi_examples": {
                "simple": {
                    "prompt_map_in_cfg": "base_assistant",
                    "metadata": "http://ollama-mock:11434/v1",
                },
            }
        }
    )
general

Load and check Settings from yml.

CLASS DESCRIPTION
ActiveLLMs

Selects the available models for the respective use cases.

InterServiceCommunication

Configuration of all microservice communications.

LogLevel

Enum class specifying possible log levels.

PostConfig

Configuration for async_post request to other microservices (e.g. parser).

ServiceEndpoints

URLs to required dependent services (e.g. parser).

Settings

General Settings for the service.

ActiveLLMs

Bases: BaseModel

Selects the available models for the respective use cases.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

microservicetemplate

List containing available models for microservicetemplate. It may contain only a subset of all models in llm_models.yml.

TYPE: list[str]

Source code in docs/microservices/template/src/models/general.py
class ActiveLLMs(BaseModel):
    """Selects the available models for the respective use cases.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        microservicetemplate (list[str]): List containing available models for microservicetemplate.
            It may contain only a subset of all models in llm_models.yml.
    """

    # if there are more services defined in the config: just ignore them
    model_config = ConfigDict(extra="ignore")

    microservicetemplate: list[str]
InterServiceCommunication

Bases: BaseModel

Configuration of all microservice communications.

PARAMETER DESCRIPTION
other_service_1

Post configuration for other_service_1.

TYPE: PostConfig

other_service_2

Post configuration for other_service_2 (longer default timeout).

TYPE: PostConfig

Source code in docs/microservices/template/src/models/general.py
class InterServiceCommunication(BaseModel):
    """Configuration of all microservice communications.

    Args:
        other_service_1 (PostConfig): Post configuration for other_service_1.
        other_service_2 (PostConfig): Post configuration for other_service_2 (longer default timeout).
    """

    other_service_1: PostConfig = PostConfig()
    other_service_2: PostConfig = PostConfig(timeout_in_s=600)
LogLevel

Bases: StrEnum

Enum class specifying possible log levels.

Source code in docs/microservices/template/src/models/general.py
class LogLevel(StrEnum):
    """Enum class specifying possible log levels."""

    CRITICAL = "CRITICAL"
    ERROR = "ERROR"
    WARNING = "WARNING"
    INFO = "INFO"
    DEBUG = "DEBUG"

    @classmethod
    def _missing_(cls, value: object) -> None:
        """Convert strings to uppercase and recheck for existence."""
        if isinstance(value, str):
            value = value.upper()
            for level in cls:
                if level == value:
                    return level
        return None
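
The _missing_ hook makes the enum lookup case-insensitive, which is convenient when the level comes from a YAML config. For illustration:

# Lowercase or mixed-case values resolve to the corresponding member.
assert LogLevel("debug") is LogLevel.DEBUG
assert LogLevel("Info") is LogLevel.INFO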
PostConfig

Bases: BaseModel

Configuration for async_post request to other microservices (e.g. parser).

The default values in this class can be overwritten by those values stated in configs/general.yml.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

max_attempts

Maximum number of request attempts before returning status code 424.

TYPE: PositiveInt

timeout_in_s

Maximum waiting duration before timeout (in seconds).

TYPE: PositiveInt

Source code in docs/microservices/template/src/models/general.py
class PostConfig(BaseModel):
    """Configuration for async_post request to other microservices (e.g. parser).

    The default values in this class can be overwritten by those values stated in configs/general.yml.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        max_attempts (PositiveInt): Maximum number of request attempts before returning status code 424.
        timeout_in_s (PositiveInt):  Maximum waiting duration before timeout (in seconds).
    """

    model_config = ConfigDict(extra="ignore")
    max_attempts: PositiveInt = 3
    timeout_in_s: PositiveInt = 200
ServiceEndpoints

Bases: BaseModel

URLs to required dependent services (e.g. parser).

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

other_service_1

URL to connect to required other_service_1.

TYPE: AnyHttpUrl

other_service_2

URL to connect to required other_service_2.

TYPE: AnyHttpUrl

Source code in docs/microservices/template/src/models/general.py
class ServiceEndpoints(BaseModel):
    """URLs to required dependent services (e.g. parser).

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        other_service_1 (AnyHttpUrl): URL to connect to required other_service_1.
        other_service_2 (AnyHttpUrl): URL to connect to required other_service_2.
    """

    # if there are more services defined in the config: just ignore them
    model_config = ConfigDict(extra="ignore")

    other_service_1: AnyHttpUrl
    other_service_2: AnyHttpUrl
Settings

Bases: BaseModel

General Settings for the service.

ATTRIBUTE DESCRIPTION
model_config

Ignores extra configuration keys not used by this service.

TYPE: ConfigDict

service_name

Name of the current service (e.g., "my-service").

TYPE: str

n_uvicorn_workers

Number of parallel Uvicorn worker processes.

TYPE: PositiveInt

service_endpoints

URLs to required dependent services (e.g. other_service_1).

TYPE: ServiceEndpoints

active_llms

Configuration of LLMs available for different use cases.

TYPE: ActiveLLMs

log_level

Minimum logging level for general logs.

TYPE: LogLevel

log_file_max_bytes

Maximum size (in bytes) of a single log file before rotation.

TYPE: PositiveInt

log_file_backup_count

Number of rotated log files to retain.

TYPE: PositiveInt

log_file

File path where logs will be written.

TYPE: FilePath

check_llm_api_interval_in_s

Interval (in seconds) to check LLM API health.

TYPE: PositiveInt

inter_service_communication

Configuration of communication with other services.

TYPE: InterServiceCommunication

METHOD DESCRIPTION
ensure_log_dir

Create the log directory after validation.

Source code in docs/microservices/template/src/models/general.py
class Settings(BaseModel):
    """General Settings for the service.

    Attributes:
        model_config (ConfigDict): Ignores extra configuration keys not used by this service.
        service_name (str): Name of the current service (e.g., "my-service").
        n_uvicorn_workers (PositiveInt): Number of parallel Uvicorn worker processes.
        service_endpoints (ServiceEndpoints): URLs to required dependent services (e.g. other_service_1).
        active_llms (ActiveLLMs): Configuration of LLMs available for different use cases.
        log_level (LogLevel): Minimum logging level for general logs.
        log_file_max_bytes (PositiveInt): Maximum size (in bytes) of a single log file before rotation.
        log_file_backup_count (PositiveInt): Number of rotated log files to retain.
        log_file (FilePath): File path where logs will be written.
        check_llm_api_interval_in_s (PositiveInt): Interval (in seconds) to check LLM API health.
        inter_service_communication (InterServiceCommunication): Configuration of communication with other services.
    """

    model_config = ConfigDict(extra="ignore")

    service_name: str = "Microservicetemplate"
    service_description: str = (
        "Microservice template for FastAPI-Applications using LLMs."
    )

    n_uvicorn_workers: PositiveInt = 1

    service_endpoints: ServiceEndpoints

    active_llms: ActiveLLMs

    log_level: LogLevel = LogLevel.INFO
    log_file_max_bytes: PositiveInt = 1 * 1024 * 1024
    log_file_backup_count: PositiveInt = 3
    log_file: FilePath = Path("/microservicetemplate/logs/log")

    # interval for checking all LLM APIs (seconds)
    check_llm_api_interval_in_s: PositiveInt = 120

    inter_service_communication: InterServiceCommunication = InterServiceCommunication()

    @model_validator(mode="after")
    def ensure_log_dir(self) -> "Settings":
        """Create the log directory after validation."""
        self.log_file.parent.mkdir(parents=True, exist_ok=True)
        return self
ensure_log_dir
ensure_log_dir()

Create the log directory after validation.

Source code in docs/microservices/template/src/models/general.py
@model_validator(mode="after")
def ensure_log_dir(self) -> "Settings":
    """Create the log directory after validation."""
    self.log_file.parent.mkdir(parents=True, exist_ok=True)
    return self
llms

Pydantic model for the LLM config.

CLASS DESCRIPTION
APIAuth

Defines authentication settings for the LLM.

LLM

This pydantic class defines the basic structure of an LLM config.

LLMAPI

Defines API-Connection to LLM.

LLMConfig

Base class as loaded from model_configs.yml.

LLMInference

Defines Inference parameters.

LLMPromptConfig

Defines the structure of a LLM prompt configuration.

LLMPromptMaps

Defines complete LLM prompt config.

LLMPrompts

Defines the selectable LLM Prompts.

APIAuth

Bases: BaseModel

Defines authentication settings for the LLM.

ATTRIBUTE DESCRIPTION
type

Either 'token' or 'basic_auth'.

TYPE: Literal

secret_path

File path where the API token or credentials are stored.

TYPE: FilePath

METHOD DESCRIPTION
get_auth_header

Generate auth part of header for http request.

Source code in docs/microservices/template/src/models/llms.py
class APIAuth(BaseModel):
    """Defines authentification settings for LLM.

    Attributes:
        type (Literal): Either 'token' or 'basic_auth'.
        secret_path (FilePath): File path where the api token or credentials are stored.
    """

    type: Literal["token", "basic_auth"]
    secret_path: FilePath

    @property
    def secret(self) -> SecretStr:
        """Load secret variable as 'secret'."""
        with open(self.secret_path) as file:
            return SecretStr(file.read().strip())

    def get_auth_header(self) -> str:
        """Generate auth part of header for http request.

        Returns:
            The auth header
        """
        auth_header = ""

        if self.type == "basic_auth":
            auth_header = f"Basic {base64.b64encode(self.secret.get_secret_value().encode()).decode()}"
        elif self.type == "token":
            auth_header = f"Bearer {self.secret.get_secret_value()}"

        return auth_header
secret property
secret

Load the secret value from 'secret_path'.

get_auth_header
get_auth_header()

Generate auth part of header for http request.

RETURNS DESCRIPTION
str

The auth header

Source code in docs/microservices/template/src/models/llms.py
def get_auth_header(self) -> str:
    """Generate auth part of header for http request.

    Returns:
        The auth header
    """
    auth_header = ""

    if self.type == "basic_auth":
        auth_header = f"Basic {base64.b64encode(self.secret.get_secret_value().encode()).decode()}"
    elif self.type == "token":
        auth_header = f"Bearer {self.secret.get_secret_value()}"

    return auth_header
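
A hedged sketch of building request headers with APIAuth; the secret path and its content are assumptions, and the file must exist because secret_path is a FilePath:

# Hypothetical token file; FilePath validation requires it to exist.
auth = APIAuth(type="token", secret_path="/run/secrets/llm_api_token")
headers = {
    "Content-type": "application/json",
    "Authorization": auth.get_auth_header(),  # "Bearer <token from file>"
}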
LLM

Bases: BaseModel

This pydantic class defines the basic structure of an LLM config.

ATTRIBUTE DESCRIPTION
label

Human-readable model name that can be presented to users.

TYPE: str

model

Model name which is used in API call, e.g. ollama tag.

TYPE: str

prompt_map

Prompt map name to load LLMPromptMaps from.

TYPE: str

is_remote

Is this LLM hosted at an external API?

TYPE: bool

context_length

Model's context length.

TYPE: PositiveInt

api

API information.

TYPE: LLMAPI

inference

Inference parameters.

TYPE: LLMInference

prompt_config

Prompts.

TYPE: LLMPromptConfig

Source code in docs/microservices/template/src/models/llms.py
class LLM(BaseModel):
    """This pydantic class defines the basic structure of a LLM config.

    Attributes:
        label (str): Human-readable model name that can be presented to users.
        model (str): Model name which is used in API call, e.g. ollama tag.
        prompt_map (str): Prompt map name to load LLMPromptMaps from.
        is_remote (bool): Is this LLM hosted at an external API?
        context_length (PositiveInt): Model's context length.
        api (LLMAPI): API information.
        inference (LLMInference): Inference parameters.
        prompt_config (LLMPromptConfig): Prompts.
    """

    label: str
    model: str
    prompt_map: str
    is_remote: bool
    api: LLMAPI
    inference: LLMInference
    prompt_config: LLMPromptConfig | None = None
LLMAPI

Bases: BaseModel

Defines API-Connection to LLM.

ATTRIBUTE DESCRIPTION
url

URL to llm API.

TYPE: AnyHttpUrl

auth

authentication setting for llm API; optional

TYPE: APIAuth | None

Source code in docs/microservices/template/src/models/llms.py
class LLMAPI(BaseModel):
    """Defines API-Connection to LLM.

    Attributes:
       url (AnyHttpUrl): URL to llm API.
       auth (APIAuth | None): authentication setting for llm API; optional
    """

    url: AnyHttpUrl
    auth: APIAuth | None = None
LLMConfig

Bases: BaseModel

Base class as loaded from model_configs.yml.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

microservicetemplate

Dictionary mapping names to definitions of the LLMs available for microservicetemplate.

TYPE: dict[str, LLM]

Source code in docs/microservices/template/src/models/llms.py
class LLMConfig(BaseModel):
    """Base class as loaded from model_configs.yml.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        microservicetemplate (dict[str, LLM]): Dictionary mapping names to definitions of the LLMs
            available for microservicetemplate.
    """

    # if there are more services defined in the config: just ignore them
    model_config = ConfigDict(extra="ignore")

    microservicetemplate: dict[str, LLM]

    def __iter__(self) -> Iterator[str]:
        """Get 'keys' for automatic merge with i.e. LLMPromptConfig."""
        return iter(self.__dict__.keys())

    def __getitem__(self, service: str) -> dict[str, LLM]:
        """Get all LLMs for a given service (e.g. "chat", "rag").

        Args:
            service (str): The service name (e.g., "chat", "rag").

        Returns:
            All configured LLMs for the given service.
        """
        return self.__getattribute__(service)
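
The __iter__ and __getitem__ methods allow the config to be traversed generically, as get_available_llms does further down. A small illustrative sketch:

# Iterate over service groups (field names) and their configured LLMs.
for service in llm_config:                      # e.g. "microservicetemplate"
    for name, llm in llm_config[service].items():
        print(service, name, llm.label)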
LLMInference

Bases: BaseModel

Defines Inference parameters.

ATTRIBUTE DESCRIPTION
temperature

Randomness / variation of the output. High values indicate more creativity.

TYPE: PositiveFloat | None

max_new_tokens

Maximum number of tokens of the generated response.

TYPE: PositiveInt | None

top_p

Threshold for sampling only from the most likely tokens.

TYPE: PositiveFloat | None

Source code in docs/microservices/template/src/models/llms.py
class LLMInference(BaseModel):
    """Defines Inference parameters.

    Attributes:
        temperature (PositiveFloat | None): Randomness / variation of the output. High values indicate more creativity.
        max_new_tokens (PositiveInt | None): Maximum number of tokens of the generated response.
        top_p (PositiveFloat | None): Threshold for sampling only from the most likely tokens.
    """

    temperature: PositiveFloat | None = 0.7
    max_new_tokens: PositiveInt | None = 2048
    top_p: PositiveFloat | None = 0.7
LLMPromptConfig

Bases: BaseModel

Defines the structure of a LLM prompt configuration.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

system

System prompts.

TYPE: LLMPrompts

Source code in docs/microservices/template/src/models/llms.py
class LLMPromptConfig(BaseModel):
    """Defines the structure of a LLM prompt configuration.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        system (LLMPrompts): System prompts.
    """

    # if there are more prompt types defined that are not used in this service: just ignore them
    model_config = ConfigDict(extra="ignore")

    system: LLMPrompts
LLMPromptMaps

Bases: BaseModel

Defines complete LLM prompt config.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

microservicetemplate

Dictionary mapping names to prompt configurations of the LLMs available for microservicetemplate.

TYPE: dict[str, LLMPromptConfig]

Source code in docs/microservices/template/src/models/llms.py
class LLMPromptMaps(BaseModel):
    """Defines complete LLM prompt config.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        microservicetemplate (dict[str, LLMPromptConfig]): Dictionary mapping names to prompt configurations
            of the LLMs available for microservicetemplate.
    """

    # if there are more services defined in the config: just ignore them
    model_config = ConfigDict(extra="ignore")

    microservicetemplate: dict[str, LLMPromptConfig]

    def __iter__(self) -> Iterator[str]:
        """Get 'keys' for automatic merge with i.e. LLMConfig."""
        return iter(self.__dict__.keys())
LLMPrompts

Bases: BaseModel

Defines the selectable LLM Prompts.

ATTRIBUTE DESCRIPTION
model_config

Used to ignore other services, which are defined in the config.

TYPE: ConfigDict

simplify

Prompt for model.

TYPE: str

Source code in docs/microservices/template/src/models/llms.py
class LLMPrompts(BaseModel):
    """Defines the selectable LLM Prompts.

    Attributes:
        model_config (ConfigDict): Used to ignore other services, which are defined in the config.
        simplify (str): Prompt for model.
    """

    # if there are more prompts defined that are not used in this service: just ignore them
    model_config = ConfigDict(extra="ignore")

    simplify: str = ""

service_logic

Central service logic. Please rename me!

FUNCTION DESCRIPTION
do_stuff

Simple processing as an example.

do_stuff
do_stuff(data_input)

Simple processing as an example.

PARAMETER DESCRIPTION
data_input

Data to process.

TYPE: ExampleInput

RETURNS DESCRIPTION
ExampleOutput

If possible: prompt_map for data_input.model_name.

Source code in docs/microservices/template/src/service_logic.py
def do_stuff(data_input: ExampleInput) -> ExampleOutput:
    """Simple processing as an example.

    Arguments:
        data_input (ExampleInput): Data to process.

    Returns:
        If possible: prompt_map for data_input.model_name.
    """
    model_to_check = data_input.model_name

    try:
        prompt_map = llm_config.microservicetemplate[model_to_check].prompt_map
        out = ExampleOutput(
            prompt_map_in_cfg=prompt_map,
            metadata=llm_config.microservicetemplate["test_model_mock"].api.url,
        )
    except KeyError as e:
        error_msg = f"Invalid 'model_name': '{model_to_check}'."
        logger.error(f"{error_msg} Exception: {e}")
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail=error_msg,
        ) from e

    return out
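
For illustration, the happy path and the error path of do_stuff; the model names follow the mock examples used elsewhere on this page:

# Known model: returns the prompt_map from the LLM config.
result = do_stuff(ExampleInput(model_name="test_model_mock", text="Hello"))
print(result.prompt_map_in_cfg)

# Unknown model: raises HTTPException with status code 400.
# do_stuff(ExampleInput(model_name="does_not_exist", text="Hello"))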

settings

Load all settings from a central place, not hidden in utils.

utils

Utility functions for logging, LLM availability checks, and configuration processing.

MODULE DESCRIPTION
base_logger

Set up the root logger for the entire application. This logger will log messages to the console and a file.

check_model_api_availability

This module provides functions to check LLM-APIs for availability.

process_configs

Methods to load configs and run config-integrity checks.

base_logger

Set up the root logger for the entire application. This logger will log messages to the console and a file.

FUNCTION DESCRIPTION
setup_logger

Initialize the logger with the desired log level and add handlers.

setup_logger
setup_logger()

Initialize the logger with the desired log level and add handlers.

Sets up the root logger, which all other loggers inherit from. Adds file and console handlers to the logger and sets the format.

Source code in docs/microservices/template/src/utils/base_logger.py
def setup_logger() -> None:
    """Initialize the logger with the desired log level and add handlers.

    Sets up the root logger, which all other loggers inherit from.
    Adds file and console handlers to the logger and sets the format.
    """
    # root logger, all other loggers inherit from this
    logger = logging.getLogger()

    # create different handlers for log file and console
    file_handler = logging.handlers.RotatingFileHandler(
        filename=settings.log_file,
        maxBytes=settings.log_file_max_bytes,
        backupCount=settings.log_file_backup_count,
    )
    console_handler = logging.StreamHandler()

    # define log format and set for each handler
    formatter = logging.Formatter(
        fmt="%(asctime)s - %(levelname)8s - %(module)s - %(funcName)s: %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S%z",
    )
    file_handler.setFormatter(formatter)
    console_handler.setFormatter(formatter)

    # add handlers to the logger
    logger.addHandler(file_handler)
    logger.addHandler(console_handler)

    logger.setLevel(settings.log_level)
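
setup_logger is meant to be called once at process start; afterwards module-level loggers inherit the handlers and level from the root logger. A short illustrative sketch:

import logging

setup_logger()
logger = logging.getLogger(__name__)  # child logger, inherits root handlers and level
logger.info("Service starting up.")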
check_model_api_availability

This module provides functions to check LLM-APIs for availability.

To check a certain LLM, use await is_llm_available(...). To get all LLMs that are activated in configs/general.yml, use await get_available_llms().

FUNCTION DESCRIPTION
get_available_llms

Returns a list of available LLMs.

is_llm_available

Check if requested model is available via given API.

get_available_llms async
get_available_llms()

Returns a list of available LLMs.

RETURNS DESCRIPTION
list[dict[str, str]]

List of available LLMs with selected information.

Source code in docs/microservices/template/src/utils/check_model_api_availability.py
async def get_available_llms() -> list[dict[str, str]]:
    """Returns a list of available LLMs.

    Returns:
        List of available LLMs with selected information.
    """
    available_llms = []

    # iterate over model_groups (services), i.e. chat, RAG, embedding, ...
    for model_group_key in llm_config:
        logger.debug(f"Checking APIs for {model_group_key}-LLMs.")
        model_group = llm_config[model_group_key]

        for llm_name, llm in model_group.items():
            logger.debug(f"Checking availability of {llm_name}")
            if await is_llm_available(
                llm.api,
                llm_model=llm.model,
                llm_id=llm_name,
            ):
                llm_dict = llm.model_dump(include=["label", "is_remote"])
                llm_dict["name"] = llm_name

                available_llms.append(llm_dict)

    return available_llms
is_llm_available async
is_llm_available(llm_api, llm_model, llm_id, timeout_in_s=10)

Check if requested model is available via given API.

Availability is checked by sending a GET request to the /v1/models endpoint of the llm API. The LLM is considered available if llm_model is in the response. This requires an API that conforms with the OpenAI standard.

PARAMETER DESCRIPTION
llm_api

The LLMAPI instance to check

TYPE: LLMAPI

llm_model

Name of the LLM as used in the llm API (cf. "model" in the config file)

TYPE: str

llm_id

ID of the LLM as used in the config file as reference. Only used for logging

TYPE: str

timeout_in_s

Http timeout in seconds; defaults to 10

TYPE: int DEFAULT: 10

RETURNS DESCRIPTION
bool

True if the LLM is available, False if not.

Source code in docs/microservices/template/src/utils/check_model_api_availability.py
async def is_llm_available(
    llm_api: LLMAPI,
    llm_model: str,
    llm_id: str,
    timeout_in_s: int = 10,
) -> bool:
    """Check if requested model is available via given API.

    Availability is checked by sending a GET request to the /v1/models endpoint of the llm API. The LLM is considered
    available if llm_model is in the response. This requires an API that conforms with the OpenAI standard.

    Args:
        llm_api (LLMAPI): The LLMAPI instance to check
        llm_model (str): Name of the LLM as used in the llm API (cf. "model" in the config file)
        llm_id (str): ID of the LLM as used in the config file as reference. Only used for logging
        timeout_in_s (int): Http timeout in seconds; defaults to 10

    Returns:
        True if the LLM is available, False if not.
    """
    headers = {"Content-type": "application/json"}

    # Authorization is not always needed
    if llm_api.auth:
        headers["Authorization"] = llm_api.auth.get_auth_header()

    url = f"{str(llm_api.url).rstrip('/')}/models"

    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(
                url,
                headers=headers,
                timeout=timeout_in_s,
            )
        logger.debug(f"Calling LLM API: {llm_id=}, {url=}, {response.status_code=}")
    except Exception as e:
        logger.warning(
            f"Connection to API ({url}) could not be established for LLM {llm_id}. Error message: {e}"
        )
        return False

    if response.status_code != HTTPStatus.OK:
        logger.warning(
            f"Response of API ({url}) was not OK for LLM {llm_id}. {response.status_code=}"
        )
        return False

    # check if the request model is in any of the data.id entries of the response
    api_models = [api_model.get("id") for api_model in response.json().get("data", [])]
    if llm_model not in api_models:
        logger.warning(
            f"Requested model ({llm_model}) is not provided by the API ({url}) for LLM {llm_id}. "
            f"Available models are: {api_models}"
        )
        return False

    return True
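
A hedged example of running the check directly; the URL and model name mirror the mock examples above and are assumptions:

import asyncio

# Hypothetical API definition matching the mock examples on this page.
api = LLMAPI(url="http://ollama-mock:11434/v1")
available = asyncio.run(
    is_llm_available(api, llm_model="test_model_mock", llm_id="test_model:mock")
)
print(available)  # True only if the API responds and lists the model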
process_configs

Methods to load configs and run config-integrity checks.

FUNCTION DESCRIPTION
load_all_configs

Load config settings from respective paths.

load_from_yml_in_pydantic_model

Load config from 'yaml_path' into the given pydantic model.

load_yaml

Load a yaml file.

merge_specific_cfgs_in_place

Copy Prompt-config to appropriate section in general llm_config. Edit in-place!

postprocess_configs

Post-Process loaded configs.

remove_unavailable_models

Remove models from all usecases if they are not in 'active_models'. Edit in-place!

load_all_configs
load_all_configs(general_config_paths, path_to_llm_prompts, path_to_llm_model_configs)

Load config settings from respective paths.

PARAMETER DESCRIPTION
general_config_paths

Path to config, matching 'Settings'

TYPE: Path

path_to_llm_prompts

Path to config, matching 'LLMPromptMaps'

TYPE: Path

path_to_llm_model_configs

Path to config, matching 'LLMConfig'

TYPE: Path

RETURNS DESCRIPTION
tuple[Settings, LLMConfig]

Config loaded into their Pydantic Model.

Source code in docs/microservices/template/src/utils/process_configs.py
def load_all_configs(
    general_config_paths: Path,
    path_to_llm_prompts: Path,
    path_to_llm_model_configs: Path,
) -> tuple[Settings, LLMConfig]:
    """Load config settings from respective paths.

    Args:
        general_config_paths (Path): Path to config, matching 'Settings'
        path_to_llm_prompts (Path): Path to config, matching 'LLMPromptMaps'
        path_to_llm_model_configs (Path): Path to config, matching 'LLMConfig'

    Returns:
        Config loaded into their Pydantic Model.
    """
    settings = load_from_yml_in_pydantic_model(general_config_paths, Settings)
    llm_prompts = load_from_yml_in_pydantic_model(path_to_llm_prompts, LLMPromptMaps)
    llm_config = load_from_yml_in_pydantic_model(path_to_llm_model_configs, LLMConfig)

    postprocess_configs(settings, llm_prompts, llm_config)

    return settings, llm_config
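
A sketch of how the loader might be wired in settings.py; the config file paths are assumptions, not necessarily the template's actual layout:

from pathlib import Path

# Hypothetical paths; adjust to the actual config locations.
settings, llm_config = load_all_configs(
    general_config_paths=Path("configs/general.yml"),
    path_to_llm_prompts=Path("configs/llm_prompts.yml"),
    path_to_llm_model_configs=Path("configs/llm_models.yml"),
)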
load_from_yml_in_pydantic_model
load_from_yml_in_pydantic_model(yaml_path, pydantic_reference_model)

Load config from 'yaml_path' into the given pydantic model.

PARAMETER DESCRIPTION
yaml_path

Yaml to load

TYPE: Path

pydantic_reference_model

Pydantic model to load yaml into

TYPE: BaseModel

RETURNS DESCRIPTION
BaseModel

BaseModel derived pydantic data class.

Source code in docs/microservices/template/src/utils/process_configs.py
def load_from_yml_in_pydantic_model(
    yaml_path: Path, pydantic_reference_model: BaseModel
) -> BaseModel:
    """Load config from 'list_of_yaml_paths' into given pydantic-Model.

    Args:
        yaml_path (Path): Yaml to load
        pydantic_reference_model (BaseModel): Pydantic model to load yaml into

    Returns:
        BaseModel derived pydantic data class.

    """
    data = load_yaml(yaml_path)

    try:
        pydantic_class = pydantic_reference_model(**data)
        logger.info(f"Config loaded from: '{yaml_path}'")
        return pydantic_class

    except ValidationError as e:
        logger.critical(f"Error loading config: '{e}'")
        raise e
load_yaml
load_yaml(yaml_path)

Load a yaml file.

PARAMETER DESCRIPTION
yaml_path

Path to yaml

TYPE: Path

RETURNS DESCRIPTION
dict[str, Any]

Content of loaded yaml.

Source code in docs/microservices/template/src/utils/process_configs.py
def load_yaml(yaml_path: Path) -> dict[str, Any]:
    """Load a yaml file.

    Args:
        yaml_path (Path): Path to yaml

    Returns:
        Content of loaded yaml.
    """
    if not yaml_path.exists():
        logger.error(f"Invalid path: '{yaml_path}'")
        raise FileNotFoundError

    with open(yaml_path) as file:
        return yaml.safe_load(file)
merge_specific_cfgs_in_place
merge_specific_cfgs_in_place(llm_config, llm_prompts)

Copy Prompt-config to appropriate section in general llm_config. Edit in-place!

The prompt config is merged only if the 'prompt_map' in LLMConfig can be found in LLMPromptMaps, i.e. this generalizes something like:

cfg["phi3:mini"].prompt_config = prompt[cfg["phi3:mini"].prompt_map]

PARAMETER DESCRIPTION
llm_config

Target for merge of Prompt parameter

TYPE: LLMConfig

llm_prompts

Source to merge Prompt parameter from

TYPE: LLMPromptMaps

RETURNS DESCRIPTION
bool

True if no problems occurred.

Source code in docs/microservices/template/src/utils/process_configs.py
def merge_specific_cfgs_in_place(
    llm_config: LLMConfig, llm_prompts: LLMPromptMaps
) -> bool:
    """Copy Prompt-config to appropriate section in general llm_config. Edit in-place!

    Only if the 'prompt_map' in LLMConfig can be found in LLMPromptMaps will it be merged,
    i.e. this generalizes something like:

    cfg["phi3:mini"].prompt_config = prompt[cfg["phi3:mini"].prompt_map]

    Args:
        llm_config (LLMConfig): Target for merge of Prompt parameter
        llm_prompts (LLMPromptMaps): Source to merge Prompt parameter from

    Returns:
        True if no problems occurred.

    """
    no_issues_occurred = True
    for usecase in llm_config:
        # load identical usecases, i.e. chat, RAG
        try:
            cfg = getattr(llm_config, usecase)
            prompt = getattr(llm_prompts, usecase)
        except AttributeError:
            logger.warning(
                f"Usecase '{usecase}' not matching between prompt- and general llm config. \
                    Skipping cfg-merge for '{usecase}' .."
            )
            no_issues_occurred = False
            continue

        # copy prompt config to its usecase- and model-counterpart
        for model in cfg:
            prompt_map_to_use = cfg[model].prompt_map
            if prompt_map_to_use in prompt:
                cfg[model].prompt_config = prompt[prompt_map_to_use]
            else:
                logger.warning(
                    f"'prompt_map: {prompt_map_to_use}' from LLM-config not in prompt-config for '{usecase}'. \
                        Skipping .."
                )
                no_issues_occurred = False
                continue

    return no_issues_occurred
postprocess_configs
postprocess_configs(settings, llm_prompts, llm_config)

Post-Process loaded configs.

Remove unused models (from settings.active_models), merge LLMPromptMaps into LLMConfig.

PARAMETER DESCRIPTION
settings

Config matching pydantic 'Settings'.

TYPE: Settings

llm_prompts

Config matching pydantic 'LLMPromptMaps'.

TYPE: LLMPromptMaps

llm_config

Config matching pydantic 'LLMConfig'.

TYPE: LLMConfig

RETURNS DESCRIPTION
LLMConfig

Merged and filtered LLM configuration.

Source code in docs/microservices/template/src/utils/process_configs.py
def postprocess_configs(
    settings: Settings, llm_prompts: LLMPromptMaps, llm_config: LLMConfig
) -> LLMConfig:
    """Post-Process loaded configs.

    Remove unused models (from settings.active_models), merge LLMPromptMaps into LLMConfig.

    Args:
        settings (Settings): Config matching pydantic 'Settings'.
        llm_prompts (LLMPromptMaps): Config matching pydantic 'LLMPromptMaps'.
        llm_config (LLMConfig): Config matching pydantic 'LLMConfig'.

    Returns:
        Merged and filtered LLM configuration.
    """
    remove_unavailable_models(llm_config, settings.active_llms)
    merge_specific_cfgs_in_place(llm_config, llm_prompts)

    return llm_config
remove_unavailable_models
remove_unavailable_models(input_config, active_models)

Remove models from all usecases if they are not in 'active_models'. Edit in-place!

PARAMETER DESCRIPTION
input_config

Config to change

TYPE: LLMConfig

active_models

Models to keep; all others are removed.

TYPE: ActiveLLMs

Source code in docs/microservices/template/src/utils/process_configs.py
def remove_unavailable_models(
    input_config: LLMConfig, active_models: ActiveLLMs
) -> None:
    """Remove models from all useacases, if they are not in 'active_models'. Edit in-place!

    Args:
        input_config (LLMConfig): Config to change
        active_models (ActiveLLMs): Models to keep; all others are removed.
    """
    for usecase in input_config:
        cfg = getattr(input_config, usecase)
        available_models_for_usecase = getattr(active_models, usecase)
        for model in list(cfg):
            if model not in available_models_for_usecase:
                cfg.pop(model)
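
Illustrative effect of the in-place filtering, assuming settings and llm_config were loaded as sketched above:

# Only models listed in settings.active_llms.microservicetemplate survive.
remove_unavailable_models(llm_config, settings.active_llms)
print(list(llm_config.microservicetemplate))  # e.g. ['test_model_mock']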