Can you describe what you'd like to see for #1? We currently show everything, but let people filter via the UI or URL param, e.g., https://thefastest.ai/?mf=3-70
The basic idea would be a simple voice conferencing bridge that you connect to via WebTransport. There are a number of more interesting things we'd like to pursue once we can get this basic scenario working.
I agree caution is needed here. We have taken a few steps:
- Rate limits are enforced to provide caps on agent and function usage.
- Execution depth is capped to prevent the LLM from getting into loops.
- Function output is sanitized to prevent corruption of LLM state.
- Functions execute in a completely separate environment from the rest of the service, including the LLM, to reduce the impact from bad functions.
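To make the sanitization step concrete, here's a minimal sketch of what scrubbing function output before it re-enters the LLM context could look like. The function name and the character/length limits are illustrative, not our actual implementation:

```python
import re

MAX_OUTPUT_CHARS = 4000  # illustrative cap, not the service's real limit

def sanitize_function_output(raw: str) -> str:
    """Strip control characters and truncate a function's output before it
    is fed back into the LLM context, so a misbehaving function can't
    corrupt LLM state with junk bytes or an unbounded payload."""
    # Remove control characters, but keep \t and \n for readable output.
    cleaned = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", raw)
    return cleaned[:MAX_OUTPUT_CHARS]
```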
Note that this doesn't entirely prevent "; DROP TABLE"-style injection attacks against the implementation of the function, but that problem isn't unique to us. It may, however, be possible for the LLM to inspect function inputs and flag overtly malicious ones.
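A cheap first line of defense before involving the LLM at all would be a heuristic pre-check on function arguments. This is a hypothetical sketch with a deliberately tiny pattern list, not an exhaustive filter:

```python
import re

# Illustrative patterns for obviously injection-shaped argument values.
SUSPICIOUS = [
    re.compile(r";\s*drop\s+table", re.IGNORECASE),  # chained SQL statement
    re.compile(r"<script\b", re.IGNORECASE),         # HTML/JS injection
]

def flag_malicious(args: dict) -> list[str]:
    """Return the names of arguments whose values match a suspicious
    pattern, so the call can be rejected or escalated for review."""
    return [
        name
        for name, value in args.items()
        if any(p.search(str(value)) for p in SUSPICIOUS)
    ]
```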
Current context lengths are usually more than adequate for these interactions; the details of each intermediate step within an execution only need to be retained until the final response is emitted.
Yes, the LLM sees the result of the function and processes it according to what it has learned from its few-shot examples (which may involve calling more functions, or returning a formatted response).
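That loop can be sketched as follows. `call_llm` and `run_function` are stand-ins for the real service internals, and the reply shape and `MAX_DEPTH` value are assumptions for illustration; the depth cap mirrors the limit mentioned above, and intermediate step results only live in the per-turn message list:

```python
MAX_DEPTH = 5  # illustrative execution-depth cap

def run_turn(messages, call_llm, run_function):
    """Drive one user turn: keep calling functions the model requests
    until it emits a final response or the depth cap is hit."""
    for _ in range(MAX_DEPTH):
        reply = call_llm(messages)
        if reply.get("function_call") is None:
            return reply["content"]            # final formatted response
        name = reply["function_call"]["name"]
        args = reply["function_call"]["args"]
        result = run_function(name, args)      # runs in the isolated env
        # Feed the result back so the model can continue; these
        # intermediate messages are discarded once the turn completes.
        messages = messages + [
            {"role": "assistant", "function_call": reply["function_call"]},
            {"role": "function", "name": name, "content": result},
        ]
    raise RuntimeError("execution depth cap exceeded")
```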