While Nvidia sells AI chips for $30,000, researchers show how much AI an ancient PC from 1998 can handle

Even under Windows 98 on very old hardware, an AI based on a large language model can run. (Image: Stock.adobe.com - Carsten Reisinger)

Artificial intelligence has been on everyone's lips for years, and manufacturers such as Nvidia make a lot of money with hardware designed for it.

The latest Blackwell B200 chip for AI training probably costs customers around $30,000 to $40,000, as reported by Computerbase, despite some relativizing remarks by Nvidia CEO Jensen Huang.

When it comes to the local processing of AI requests, however, a team of researchers from the University of Oxford has now achieved remarkable success with very old and slow hardware (via Techspot).

This is what happened:

  • Exo Labs, an organization founded by the Oxford researchers in question, has shared a video on X. It shows a PC with a 350 MHz Pentium II, 128 MByte of RAM and Windows 98, on which an AI model runs successfully.
  • According to Techspot, it is a powerful AI language model based on Llama2.c code. Given the very terse prompt "Sleepy Joe said", it tells a short, reasonably coherent (if not really sensible) story, and does so fairly quickly.
  • The model consists of a total of 260,000 parameters, which the old PC processed at a speed of 39.31 tokens per second. With a model of one billion parameters, the rate drops to just 0.0093 tokens per second (see the quick calculation below).
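To put those throughput figures into perspective, here is a minimal Python sketch (not part of Exo Labs' work) that estimates how long generating a short story would take at each reported rate. The assumed story length of 100 tokens is an illustrative guess; the article only reports the rates themselves:

```python
# Rough estimate: how long does a ~100-token story take at the reported rates?
# The 100-token story length is an assumption for illustration only.

rates_tokens_per_second = {
    "260,000-parameter model": 39.31,     # rate reported by Exo Labs
    "1-billion-parameter model": 0.0093,  # rate reported by Exo Labs
}

story_tokens = 100  # assumed length of a short story like the one quoted below

for model, rate in rates_tokens_per_second.items():
    seconds = story_tokens / rate
    print(f"{model}: about {seconds:,.0f} seconds (~{seconds / 3600:.2f} hours)")
```

Under that assumption, the small model finishes in a few seconds, while the one-billion-parameter model would need roughly three hours for the same short story.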

What are tokens? Among other things, they serve as an aid when an AI deals with natural language. Put simply, they break the text down into parts that are easier to process, such as words, parts of words, punctuation marks or individual letters.

The more of these individual parts or tokens an AI can process per second, the faster it answers requests.
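As a rough illustration, the following Python sketch shows a naive tokenizer that simply splits text into words and punctuation marks. Real language models use more sophisticated subword tokenizers (such as byte-pair encoding), so actual token boundaries and counts differ; this is only meant to make the concept tangible:

```python
import re

def simple_tokenize(text: str) -> list[str]:
    # \w+ matches runs of letters/digits, [^\w\s] matches single punctuation marks.
    # This is a deliberately naive stand-in for a real LLM tokenizer.
    return re.findall(r"\w+|[^\w\s]", text)

prompt = 'Sleepy Joe said: "Hello, Spot."'
tokens = simple_tokenize(prompt)
print(tokens)       # ['Sleepy', 'Joe', 'said', ':', '"', 'Hello', ',', 'Spot', '.', '"']
print(len(tokens))  # 10 pieces for this short prompt
```

A real tokenizer would then map each of these pieces to a numeric ID from a fixed vocabulary before feeding it to the model.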

In the following you will find the exact (English) wording of the text that the AI created on the Windows 98 PC:

Sleepy Joe said: "Hello, Spot. Do you want to be careful with me?" Spot told Spot, "Yes, I will step back!" Spot replied, "I lost my broken broke in my cold rock. It is okay, you can't." Spoon and Spot went to the top of the rock and pulled his broken rock. Spot was sad because he was not lucky anymore. Spot thought for a moment and looked for a place. He said, "Don't worry, Spoon, you can make your broken pieces of the door for anyone today.

Certainly nothing a current AI provider would want to show off. Given the severely limited hardware resources used here, however, it is still remarkable.

What is the goal of the researchers?

According to Techspot, Exo Labs wants to democratize access to AI. Comparatively low hardware requirements could make an important contribution to that.

Basically, two different but closely related areas have to be considered:

  1. On the one hand, training suitable AI models to create the necessary basis for requests to an AI.
  2. On the other hand, processing the actual user requests using these AI models.

In both cases, the lowest possible hardware requirements are desirable, both for low entry hurdles and with a view to energy consumption.

It remains to be seen to what extent Exo Labs' further research can make a decisive contribution here.