

It’s hardly their fault for assuming this was about LLMs or multimodal AI models when, in fact, the article states that these “large physics models” may use any of several architectures, including the transformers that underlie LLMs:
“the models may use the transformer architecture that underlies LLMs, a generalized version of convolutional neural networks known as geometric deep learning, or an architecture that can solve partial differential equations called neural operators.”
It seemed you really needed to take your frustrations out on someone else’s comment.


GrapheneOS actually supports this now, if you need that extra push. 😉
You might be interested in GadgetBridge.