Never before in my life had I seen so many smart people working toward the same goal: making LLMs better. After talking to many people in both industry and academia, I noticed ten major research directions emerging. The first two, hallucinations and context learning, are probably the most talked about today. I’m most excited about numbers 3 (multimodality), 5 (new architectures), and 6 (GPU alternatives).
Sadly, I have nothing valuable to add to this conversation.
Except that I misread LLM as LMG and thought this was another post about them, which left me really confused. Then I realized it might not be entirely off topic, before checking again and realizing that it definitely is.