

Oops, my mistake
Apology accepted. Have a great day!




Read my prior post, I specifically SAID it was a model number.
You’re embarrassing yourself with your pedantry. You said 80486 didn’t exist. It did. Seriously, quit while you’re behind here.


Such a confident answer! And so incorrect too!



Honestly, we know where the root of this problem came from. Back in the 1990s, Intel broke with the convention of using ever-increasing numeric model numbers.
Intel didn’t like that other manufacturers of x86 CPUs (AMD, Cyrix, IBM) could use the same numbering scheme, and the courts ruled that a bare number like 386 couldn’t be trademarked. So Intel created “Pentium” because a name could be trademarked, which meant other companies couldn’t use it.


That’s my bad for not remembering AMD’s fucking atrocious nonstandard mobile chip naming schemes.
Atrocious compared to Intel? The first CPU named Core i7 was released in 2008, yet Intel was still releasing CPUs under the Core i7 name as recently as 2023. They both suck, but in different ways.


I’m not sure we can use the “Windows x86 vs Windows ARM” analogy for this new unit from Apple. macOS Tahoe is a native ARM OS on both the high-end machines and now this low-end unit. With Windows, it’s a completely different CPU architecture.
Apple has to know this is going to cannibalize its low-end (8GB/256GB SSD) MacBook Air line. So will Apple discontinue the low-config Air, or is there some other differentiator that still makes it compelling?


If it’s the full macOS, I don’t think we can say that. That’s what makes this so interesting: it’s a first of its kind.
Now, if it performs like a dog compared to an equivalently specced M3 or M4 MacBook Air, then we could probably call it a glorified tablet.


Especially when they’re using it as a defense to use racial slurs in a Wal-Mart on a Saturday afternoon.


Tried the same thing in Asahi, but without macOS’s memory management and access to GPU acceleration, it just wasn’t feasible.
Thank you for sharing this result. I knew Asahi’s memory management wasn’t as robust (so I got a 24GB RAM M2 unit to overcome this).
For your macOS Ollama implementation, are you able to leverage the NPU in the hardware (which I know is also unavailable so far in Asahi)?


This was my first question. This laptop looks like a really strange bird from a hardware point of view. It runs macOS (Tahoe) but uses an iPhone/iPad CPU (not the M1 or M2 CPUs that Asahi runs on today).


Powered by A18 Pro
Completing the MacBook Neo experience is macOS Tahoe
Whoa, this is new! A version of macOS running on an iPhone/iPad CPU.


What was once old is new again!


I mean good-ish in the lesser-evil sense. I don’t expect any of those to be 100% ethical, but some are a lot worse than others.
Ethics are subjective. “Good-ish” to you may mean you’re fine if it’s trained on copyrighted works, as long as it wasn’t done with electricity from diesel generators belching exhaust into the local Memphis atmosphere (I’m looking at you, Grok). Llama doesn’t do the diesel generator thing, but it’s a product of the Facebook corporation. So is that “good-ish” to you or not? I don’t know. That’s up to you.
It may not be fast, but your i3 laptop with 12GB of system RAM can absolutely run a local LLM. This is where that “performance/accuracy” question I raised comes in. It won’t be very fast, and you won’t be running anything on the scale of the big hosted models like GPT-5. However, if your needs are light, light models exist. Give this a read
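As a rough back-of-the-envelope check (all numbers here are illustrative assumptions, not benchmarks), you can estimate whether a quantized model’s weights will fit in a machine’s RAM before bothering to download it:

```python
def model_ram_gb(params_billions: float, bits_per_weight: float,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate for loading a quantized model's weights.

    `overhead` is an illustrative fudge factor for the KV cache and
    runtime buffers; real usage varies by runtime and context length.
    """
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 7B model at 4-bit quantization comes in around 4 GB -> fine on 12 GB RAM.
print(f"7B @ 4-bit:  {model_ram_gb(7, 4):.1f} GB")
# A 70B model at 4-bit needs roughly 40+ GB -> out of reach for that laptop.
print(f"70B @ 4-bit: {model_ram_gb(70, 4):.1f} GB")
```

By this estimate, the small 4-bit models are exactly the “light models” that make an i3 with 12GB workable, while anything in the 70B class is off the table.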


Depends on your definition of “good-ish”. Do you mean:
Running one locally on your own hardware would likely reach “good-ish” with some sacrifices in performance/accuracy (unless you’ve got a lot of expensive hardware to run very large models). As far as ethical origins, there are a few small models trained on public domain/non-stolen content, but their functions are far more limited.


Someone see if “Micros|op” (with the pipe character) or “MicrosIop” (with a capital letter i) is also blocked.


If you play with the parameters you can make all kinds of things happen, but all of those things are still driven by the existing information it already has or can find. It can mash things together in random new ways, but it will always work with components that already exist.
Or pure randomness, but the spirit of your point is sound. And if it is randomness, it may be unique output, but the utility of that result may be zero.
There is no awareness of context or meaning that would allow it to make intelligent choices about what it mashes together. That will always be driven by the patterns it already knows, positively or negatively.
100% AGREE. LLMs are not “thinking”. LLMs are NOT the HAL 9000 from the movie 2001: A Space Odyssey.
It’s like doing chemistry by picking random bottles from the shelf and dumping them into a beaker to see what happens. You could make an amazing discovery that way, but the chances of it happening are very, very low. And even if it does happen, there’s an excellent chance that you won’t recognize it.
100% AGREE.
I’m in favor of using LLMs for tasks that involve large-scale data analysis. They can be quite helpful, as long as the user understands their limitations and performs due diligence to validate the results.
Unfortunately what we are mostly seeing are cases where LLMs are used to generate boilerplate text or code that is assembled from a vast collection of material that someone who actually knew what they were doing had previously created. That kind of reuse is not inherently bad, but it should not be confused with what competent writers or coders do. And if LLMs really do take over a lot of routine daily tasks from people, the pool of approaches to those tasks will stagnate, and eventually degenerate, as LLMs become the primary sources of each others’ solutions.
100% agree. The degeneration is already occurring because bad LLM output is being fed back in as authoritative training data, resulting in confidently wrong answers being presented as truth. Critical thinking seems to have become an endangered species in the last 20 years, and I’m really worried that people trust LLM chatbots completely and never challenge what they output, instead accepting it as fact (and acting on those wrong things!).
LLMs may very well change the world, but not in the ways most people expect. Companies that have invested heavily in them are pushing them as the solutions to the wrong problems.
I think we have some of the pieces today that will make AI in general more trustworthy in the future. Grounding can go partway to making today’s LLMs more trustworthy. If an LLM claims something as fact, it should be able to produce the citation that supports it (outside of LLM output). That source can then be evaluated critically. Today’s grounding doesn’t go far enough, though. An LLM today will say “I got that from HERE” and simply give a document. It won’t show the page or line of text and the supporting arguments that would justify its arrival at its stated output. It can’t do these things today because what I just described is reasoning, which is something an LLM is NOT capable of. So we wait for true AGI instead.


LLMs are not capable of creating anything, including code. They are enormous word-matching search engines that try to find and piece together the closest existing examples of what is being requested. If what you’re looking for is reasonably common, that may be useful.
Just for common understanding: you’re making blanket statements about LLMs as though they apply to all LLMs. You’re not wrong if you’re speaking generally of the LLM products deployed for retail consumption, like ChatGPT. None of what I’m saying here is a defense of how these giant companies are using LLMs today. I’m just posting from a Data Science point of view on the technology itself.
However, if you’re talking about LLM technology itself, from a Data Science view, your statements may not apply. The common hyperparameter settings for LLMs choose the most likely matches for the next token (as in the ChatGPT example), but nothing about the technology requires that. In fact, you can set a model to specifically exclude the top result, or even choose the least likely result. What comes out when you set these hyperparameters is truly strange and looks like absolute garbage, but it is unique. The result is something that likely hasn’t existed before. I’m not saying this is a useful exercise; it’s the most extreme version to illustrate the point. There’s also the “temperature” hyperparameter, which introduces straight-up randomness. If you crank it up, the model will start making selections with very wide weights, resulting in pretty wild (and potentially useless) results.
What many Data Scientists do when trying to make LLMs generate something truly new and unique is balance these settings so that new, useful combinations come out without being absolute useless garbage.
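The temperature effect described above is easy to see in miniature. This is a toy sketch (the logits are made-up numbers, not from any real model): temperature rescales the logits before the softmax, so low temperature concentrates probability on the top token while high temperature flattens the distribution and lets unlikely tokens through:

```python
import math

def sample_weights(logits, temperature=1.0):
    """Softmax over logits at the given temperature.

    Low temperature sharpens the distribution (top token dominates);
    high temperature flattens it, so sampling gets much more random.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 0.5]                  # toy "next token" scores
print(sample_weights(logits, 0.5))        # sharp: top token dominates
print(sample_weights(logits, 5.0))        # flat: near-uniform, wilder sampling
```

Excluding the top result or picking the least likely token, as mentioned above, is just a different selection rule applied to these same weights.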


while at the same time, ignoring Windows telemetry,
You’re posting this statement on Lemmy? There is a disproportionately high population of Linux and macOS users here. Most of the people here ignoring Windows telemetry aren’t running Windows.


It also has a good use as the toilet of browsers. As in, if you’re ever required to temporarily install some pervasive plugin or extension to take a proctored exam or something, Edge is good to use because you know you won’t use that browser for anything you care about, and you can protect your good browsers from those garbage plugins.
They could be blocking entire IP ranges, so they wouldn’t have to store specific IPs. I’m not in the hosting industry, but I would imagine there are groups tracking the CIDR blocks (IP ranges) that VPN providers use for their exit nodes. If such a list exists, a host could simply subscribe to whatever updates occur to those lists and implement the block from them.
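The range check itself is trivial. A minimal sketch of the idea (the blocklist entries here are made-up documentation ranges, not real VPN ranges) using Python’s standard `ipaddress` module:

```python
import ipaddress

# Hypothetical blocklist of CIDR ranges attributed to VPN exit nodes.
# These are reserved documentation ranges, used here purely for illustration.
VPN_BLOCKLIST = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
]

def is_blocked(client_ip: str) -> bool:
    """True if the client IP falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in VPN_BLOCKLIST)

print(is_blocked("203.0.113.42"))  # inside a blocked /24 -> True
print(is_blocked("192.0.2.1"))     # not on the list -> False
```

A real host would refresh `VPN_BLOCKLIST` from a subscribed feed rather than hard-coding it, and would likely use a prefix trie instead of a linear scan, but the containment test is the whole trick.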