Hacker News

LLMs (large language models) primarily deal with written or spoken text. I wonder, though, whether other kinds of data could be modeled in a similar way. Could gestures or movement, for instance? Is it possible to define a language that accurately describes human movement? If we could capture and transcribe enough movement globally, could we train a model that works like an LLM but for movement? I'd like to hear any thoughts on this.
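One way to make the "language of movement" idea concrete is to discretize continuous motion data into a finite vocabulary, the way a tokenizer turns text into tokens. Here's a minimal sketch of that step using k-means over pose frames; all names, the toy data, and the codebook size are illustrative assumptions, not an existing system:

```python
import numpy as np

def build_codebook(frames: np.ndarray, vocab_size: int, iters: int = 20,
                   seed: int = 0) -> np.ndarray:
    """Cluster pose frames into `vocab_size` prototype poses (plain k-means)."""
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), vocab_size, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest prototype pose.
        dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned frames.
        for k in range(vocab_size):
            members = frames[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def tokenize(frames: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each pose frame to the index of its nearest prototype pose."""
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy "movement" stream: 2 joints x 3 coordinates = 6-dim pose, 200 frames.
rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 6))
codebook = build_codebook(frames, vocab_size=16)
tokens = tokenize(frames, codebook)
print(tokens[:10])  # a discrete token sequence a transformer could train on
```

Once movement is a token sequence like this, next-token prediction applies exactly as it does for text; production systems typically use learned vector quantization (e.g. VQ-VAE codebooks) rather than raw k-means, but the tokenize-then-predict shape is the same.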


I think transformers could be leveraged in a similar way for robotics. Prior art probably exists in self-driving and humanoid robots.



