I'm developing a small Python webapp as a sort of finger exercise; it's mostly a chatbot. I'm using the Quart framework, which is pretty much like Flask, just async. Now I want to connect it to an LLM inference endpoint. While I could make the HTTP requests myself, I'd prefer a library that does that for me. It should support the usual OpenAI-style API; in the end I'd like it to connect to backends like Ollama and KoboldCPP. No harm if it also supports image generation, agents, tools, or vector databases, but that's optional.