Simple LLM Understanding
Someone asked me about the mental models I use for implementing and testing LLMs. A quick note on it here.
(For open GPT-based models.) If you have worked with multidisciplinary teams, I find this comes easily. Think about it the way you would when communicating across disciplines: address the LLM as a team member with its own backstory and nuance. Split into parts:
- Understand that an LLM needs high-level context like any team mate, and that it defaults to being “American” unless your system prompt says otherwise. (This permeates how it interacts, what it references, and its output style.)
- Allow some space for “chain of thought”, at least in the context of evals. This has really worked for me, especially with GPT-4: you can really make it think like you.
- If the output is the whole deliverable (e.g. a letter, comms, or a report), use styles and examples. The rest of the prompt need not be lengthy.
- Test and compare: prompting works differently for different tasks, so eventually it’s on you.
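The first point above can be sketched in code. This is a minimal sketch, assuming an OpenAI-style chat message format (the role strings and the `build_messages` helper are illustrative, not a fixed API): give the model the same high-level context and locale you would give a new team mate, in the system prompt.

```python
def build_messages(task: str, locale: str = "en-GB") -> list[dict]:
    """Wrap a task in a system prompt that sets backstory and locale.

    The team backstory and locale wording here are placeholders --
    the point is that they live in the system prompt, not the task.
    """
    system_prompt = (
        "You are a senior analyst on a multidisciplinary product team. "
        f"Write for a {locale} audience: use its spelling, date formats, "
        "and conventions unless told otherwise."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

# Usage: the task stays short because the context lives in the system role.
messages = build_messages("Summarise last quarter's incident reports.")
```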
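The chain-of-thought and style-example points can be sketched the same way. A minimal sketch, where the instruction wording, the `eval_prompt` and `styled_prompt` helpers, and the example snippets are all illustrative assumptions rather than a fixed recipe:

```python
# Ask the model to reason before answering -- useful during evals,
# where you want to see how it "thinks".
COT_SUFFIX = "Think step by step and show your reasoning before the final answer."

def eval_prompt(question: str) -> str:
    """Append a chain-of-thought instruction for evaluation runs."""
    return f"{question}\n\n{COT_SUFFIX}"

def styled_prompt(task: str, examples: list[str]) -> str:
    """Lead with style examples; the rest of the prompt stays short."""
    shots = "\n---\n".join(examples)
    return (
        f"Match the tone and format of these examples:\n{shots}"
        f"\n---\nNow: {task}"
    )

# Usage: two tiny style shots, then the actual task.
p = styled_prompt(
    "Draft a two-line status update on the migration.",
    ["Shipped: billing v2. Next: load tests.",
     "Blocked: waiting on legal review."],
)
```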
Loves and cares about deep, evolved entanglements with AI.