# Agentic AI architecture

## Can we use the native thinking mode of models instead of the think tool with RequirementAgent? Would it be possible to make that work with BeeAI? If not, can we tune arguments of RequirementAgent?

LiteLLM supports translating the `reasoning_effort` option into model-specific reasoning/thinking configuration,
allowing control over the level of reasoning for models that support it. It also converts model thoughts
into `reasoning_content` (universal) and `thinking_blocks` (Anthropic-specific) attributes in assistant messages.

Docs: https://docs.litellm.ai/docs/reasoning_content

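To illustrate the kind of translation LiteLLM performs, here is a minimal sketch that maps a universal `reasoning_effort` value to an Anthropic-style `thinking` configuration. The token budgets are illustrative assumptions, not LiteLLM's actual values; check the docs linked above.

```python
# Sketch of translating a universal reasoning_effort option into Anthropic's
# provider-specific "thinking" configuration, similar in spirit to what
# LiteLLM does internally. The budget numbers are illustrative assumptions.

def effort_to_anthropic_thinking(reasoning_effort: str) -> dict:
    budgets = {"low": 1024, "medium": 2048, "high": 4096}  # assumed values
    if reasoning_effort not in budgets:
        raise ValueError(f"unknown reasoning_effort: {reasoning_effort}")
    return {"type": "enabled", "budget_tokens": budgets[reasoning_effort]}

print(effort_to_anthropic_thinking("low"))
# {'type': 'enabled', 'budget_tokens': 1024}
```
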
BeeAI doesn't support this, but implementing it would be quite simple: it is mostly a matter of propagating
the options/attributes and combining that with, for instance, a custom middleware.

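A minimal sketch of the propagation idea, assuming assistant messages are plain dicts carrying LiteLLM's `reasoning_content` attribute; the helper name is hypothetical and not part of the BeeAI API:

```python
# Hypothetical helper: separate model thoughts from the visible answer so a
# custom middleware can log both. The message shape follows LiteLLM's
# convention of attaching `reasoning_content` to assistant messages.

def split_reasoning(message: dict) -> tuple:
    """Return (reasoning, content) from an assistant message dict."""
    return message.get("reasoning_content"), message.get("content", "")

msg = {
    "role": "assistant",
    "content": "The answer is 4.",
    "reasoning_content": "2 + 2 equals 4.",
}
reasoning, content = split_reasoning(msg)
print(reasoning)  # 2 + 2 equals 4.
print(content)    # The answer is 4.
```
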
So the answer is yes, this is definitely possible. However, as it turns out, Anthropic models don't support reasoning
combined with tool constraints. Considering that BeeAI's `RequirementAgent` is built on tool constraints,
and that `ToolCallingAgent` has just been deprecated in its favor, this means Anthropic models can't work with BeeAI
with reasoning enabled.

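The incompatibility can be sketched as a simple request check: with extended thinking enabled, Anthropic only accepts a non-forcing tool choice, while tool constraints require forcing one. This mirrors the documented restriction and is not Anthropic's actual validation code:

```python
# Sketch of the conflict: with extended thinking enabled, Anthropic only
# accepts tool_choice "auto" or "none", so a forced/constrained tool choice
# (which RequirementAgent relies on) cannot be combined with reasoning.

def request_is_valid(thinking_enabled: bool, tool_choice_type: str) -> bool:
    if thinking_enabled and tool_choice_type not in ("auto", "none"):
        # e.g. tool_choice={"type": "tool", "name": "..."} would be rejected
        return False
    return True

print(request_is_valid(True, "tool"))   # False: reasoning + forced tool
print(request_is_valid(False, "tool"))  # True: forcing works without reasoning
```
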
## Is BeeAI's "RequirementAgent with conditions" concept best for our purposes? Do the benefits outweigh the model-specific issues with tool calls?

Without model-native reasoning enabled, I would say it is necessary. Being able to observe model thoughts is essential
for debugging. With model-native reasoning, we no longer need the `Think` tool, and the benefits of the requirements
become questionable. Still, requirements are probably the only way to force a model to call a certain tool when it tends not to,
and I remember instances of this happening (upstream patch verification). That being said, even without
the `Think` tool we would still have issues with models that don't respect the constraints.

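To make the "force a tool call" idea concrete, here is a conceptual sketch of what such a requirement does: given the call history, it restricts which tools the model may use next. The rule, the tool names, and the function are all hypothetical and do not use the BeeAI API:

```python
# Conceptual sketch (not the BeeAI API): a requirement restricts the set of
# tools the model may call, e.g. forbidding a final answer until a
# verification tool has actually been run.

def allowed_tools(called: set, tools: list) -> list:
    # Hypothetical rule: the agent may not finish until "verify_patch"
    # has been called at least once.
    if "verify_patch" not in called:
        return [t for t in tools if t != "final_answer"]
    return tools

tools = ["think", "verify_patch", "final_answer"]
print(allowed_tools(set(), tools))             # final_answer is withheld
print(allowed_tools({"verify_patch"}, tools))  # full tool set again
```
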
## Do we need a framework at all? Would it be possible to build the agents directly on top of the VertexAI API? Would that bring us any benefits?

Probably not. However, using a framework gives us more flexibility if we want to switch providers in the future,
and gives us the benefits of API unification and caching. Also, I've just learned that, for example, Anthropic models
behave a bit differently via the VertexAI API than via the Anthropic API, so if we decided to go with
a pure VertexAI client, we could be locking ourselves out of some features.

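The unification benefit can be sketched as follows: with a LiteLLM-style unified client, switching providers is mostly a change of model prefix while the request shape stays the same. The model identifiers below are assumptions for illustration:

```python
# Sketch of the API-unification benefit: the same request structure is reused
# across providers, with only the model string changing. Model ids are
# illustrative assumptions, not a recommendation.

def build_request(provider: str, prompt: str) -> dict:
    models = {
        "vertex_ai": "vertex_ai/claude-sonnet-4",  # assumed identifiers
        "anthropic": "anthropic/claude-sonnet-4",
    }
    return {
        "model": models[provider],
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("vertex_ai", "Hello")
print(req["model"])  # vertex_ai/claude-sonnet-4
```
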
I think preserving at least LiteLLM as a low-level layer makes sense, but I believe BeeAI has its place too.
One of its biggest disadvantages is rapid development that brings breaking changes; not keeping up with them
only makes it more difficult to adapt in the future. On the other hand, upstream is very responsive and supportive,
paying attention to our issues even when they may not be strictly aligned with their focus.

I think we should decide how flexible we need to be going forward and do further research, for example
by implementing a demo agent on a pure VertexAI API client and an agent built on top of LiteLLM without BeeAI, and evaluating
the pros and cons of each approach. We also need to consider the previous points (tool constraints and reasoning).

## How complicated would it be to get a more searchable solution when debugging agentic runs? Phoenix is amazing at visualizing the runs, but the lack of "easy search in a text file" makes debugging longer. Also, the default BeeAI middleware prints everything, which makes the output hard to consume.

It should be quite easy: just implement a custom middleware. It could mirror what `openinference-instrumentation-beeai`
traces and Phoenix displays, or it could be something else entirely; thanks to the BeeAI emitter system, the possibilities are virtually
limitless. Even the default middleware has several filters and options that, if tweaked, could make it more usable,
but I think implementing a custom middleware is the way to go. It will also be necessary for logging model-native
reasoning content (see the first point).
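
A minimal sketch of such a middleware-style logger: filter emitter events by kind and render one grep-friendly line per event. The event shape and kind names below are assumptions, not the actual BeeAI emitter schema:

```python
# Hypothetical middleware core: instead of printing everything, keep only
# interesting event kinds and emit one compact, searchable line per event.
# Event dicts and kind names are assumptions for illustration.

INTERESTING = {"tool_call", "tool_result", "reasoning"}

def render_events(events: list) -> list:
    lines = []
    for ev in events:
        if ev["kind"] in INTERESTING:
            lines.append(f"[{ev['kind']}] {ev['data']}")
    return lines

events = [
    {"kind": "token", "data": "..."},               # noisy, filtered out
    {"kind": "tool_call", "data": "search(q=...)"},
    {"kind": "reasoning", "data": "need the docs first"},
]
print("\n".join(render_events(events)))
```

The same filtering could be plugged into the BeeAI emitter and the output written to a plain text file, which makes grepping a run trivial.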