https://twitter.com/adamcohenhillel/status/1781490719997526210
1.89 tokens/s, so roughly talking speed.
Wondering if anyone is trying this, looking into it, or could chime in on practicality. Ideally I'd give it free rein over its own tokens and the ability to issue the Nybble API commands, plus some memory, a set of base instructions, and a mechanism to break it out of loops, then just let it roam free in the household.

Has anyone figured out a plausible self-charging setup, e.g. parking on a wireless charging pad? Any idea how much of the Pi 5 would be needed for Nybble itself vs. Llama 3? How about voice recognition/generation?
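Something like the sketch below is what I'm picturing for the control loop. It's only a rough sketch with a bunch of assumptions baked in: Ollama serving llama3 locally on the Pi, the Nybble listening for OpenCat-style ASCII commands over serial, and the skill names ("ksit", "kwkF", etc.), serial device path, and timings are guesses you'd need to check against your own firmware and wiring.

```python
# Rough sketch of an LLM-drives-the-cat loop. Assumes Ollama is serving llama3 on
# localhost and the NyBoard accepts newline-terminated ASCII skill commands over UART.
# Skill names, serial path, and baud rate are assumptions -- verify against your setup.
import time
import requests
import serial

ALLOWED = {"ksit", "kbalance", "kwkF", "kwkL", "kwkR", "krest"}  # whitelist of skills the model may use

SYSTEM = (
    "You are a robot cat. Reply with exactly one command from this list and nothing else: "
    + ", ".join(sorted(ALLOWED))
)

ser = serial.Serial("/dev/ttyS0", 115200, timeout=1)  # UART to the NyBoard (path is a guess)
memory = []                                           # rolling log of recent actions, fed back as context


def ask_llm(prompt: str) -> str:
    # Ollama's /api/generate endpoint; could swap in llama-cpp-python to run the model in-process
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "system": SYSTEM, "stream": False},
        timeout=120,
    )
    return r.json()["response"].strip()


while True:
    prompt = "Recent actions: " + ", ".join(memory[-10:]) + "\nWhat do you do next?"
    cmd = ask_llm(prompt)

    # loop-breaking: if the model keeps repeating the same action, force a rest instead
    if memory[-3:] == [cmd] * 3:
        cmd = "krest"

    if cmd in ALLOWED:
        ser.write((cmd + "\n").encode())  # send the skill command to the cat
        memory.append(cmd)

    time.sleep(5)  # at ~2 tokens/s the LLM is the bottleneck anyway, so pace the loop
```

The whitelist is the main safety valve: the model never gets to write arbitrary bytes to the serial port, only pick from skills you've vetted, and the repeat check is a crude stand-in for the loop-breaking mechanism.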