Meta Platforms recently unveiled its in-house custom chip "family" aimed at enhancing artificial intelligence (AI) work. The company developed its first-generation chip in 2020 as part of the Meta Training and Inference Accelerator (MTIA) programme, to improve efficiency for the recommendation models used to serve advertisements and other content in news feeds.
The first MTIA chip was designed solely for an AI process known as inference, where algorithms trained on large amounts of data make judgments about which content to show next in a user's feed.
Joel Coburn, a software engineer at Meta, explained that the company initially used graphics processing units (GPUs) for inference tasks but found them ill-suited for the job. He said:
"Their efficiency is low for real models, despite significant software optimizations. This makes them challenging and expensive to deploy in practice. This is why we need MTIA."
A spokesperson for Meta did not provide details on deployment timelines for the new chip or on plans to develop chips for training models. The company has been upgrading its AI infrastructure over the past year after recognising that it lacked the hardware and software required to support AI-powered features. Consequently, Meta scrapped plans for a large-scale rollout of its in-house inference chip and began working on a more ambitious chip capable of performing both training and inference.
While the first-generation chip struggled with high-complexity AI models, it handled low- and medium-complexity models more effectively than competitor chips. The chip consumed only 25 watts of power, significantly less than market-leading chips from suppliers such as Nvidia Corp, and used an open-source chip architecture called RISC-V.
Meta also announced plans to redesign its data centres with modern AI-oriented networking and cooling systems, with the first facility set to break ground this year. The new design is expected to be 31% cheaper and built twice as quickly as the company's current data centres.