Meta Release
2024-07-01
  • An anonymous reader quotes a report from Ars Technica: _Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union's strict online competition law, the Digital Markets Act (DMA), by [offering Facebook and Instagram subscriptions](https://arstechnica.com/tech-policy/2024/07/metas-pay-for-privacy-plan-falls-afoul-of-the-law-eu-regulators-say/) as an alternative for privacy-inclined users who want to opt out of ad targeting. Today, the European Commission (EC) [announced](https://ec.europa.eu/commission/presscorner/detail/en/ip_24_3582) preliminary findings that Meta's so-called "pay or consent" or "pay or OK" model -- which gives users a choice to either pay for access to its platforms or give consent to collect user data to target ads -- is not compliant with the DMA. According to the EC, Meta's advertising model violates the DMA in two ways. First, it "does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the 'personalized ads-based service.'" And second, it "does not allow users to exercise their right to freely consent to the combination of their personal data," the press release said. Now, Meta will have a chance to review the EC's evidence and defend its policy, with today's findings kicking off a process that will take months. The EC's investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent "another important step" to ensure Meta's full compliance with the DMA. "The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access," Breton said. A Meta spokesperson told Ars that Meta plans to fight the findings -- which could trigger fines of up to 10 percent of the company's worldwide turnover, as well as fines of up to 20 percent for repeat infringement, if Meta loses. The EC agreed that more talks were needed, writing in the press release that "the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance."_ Meta continues to claim that its "subscription for no ads" model was "endorsed" by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year. "Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA," Meta's spokesperson said. "We look forward to further constructive dialogue with the European Commission to bring this investigation to a close." Meta rolled out its ad-free subscription service option [last November](https://tech.slashdot.org/story/23/10/30/1229247/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe). "Depending on where you purchase it will cost $10.5/month on the web or $13.75/month on iOS and Android," said the company in a blog post. "Regardless of where you purchase, the subscription will apply to all linked Facebook and Instagram accounts in a user's Accounts Center. As is the case for many online subscriptions, the iOS and Android pricing take into account the fees that Apple and Google charge through respective purchasing policies."
2024-07-17
  • Posted by [BeauHD](https://www.linkedin.com/in/beauhd/) on Wednesday July 17, 2024 @06:02PM from the uncertain-future dept. According to Axios, Meta will [withhold future multimodal AI models from customers in the European Union](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) "due to the unpredictable nature of the European regulatory environment." From the report: _Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said. Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR -- the EU's existing data protection law. Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June. Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed. In June -- after announcing its plans publicly -- Meta was ordered to pause the training on EU data. A couple weeks later it received dozens of questions from data privacy regulators from across the region. The United Kingdom has a nearly identical law to GDPR, but Meta says it isn't seeing the same level of regulatory uncertainty and plans to launch its new model for U.K. users. A Meta representative told Axios that European regulators are taking much longer to interpret existing law than their counterparts in other regions, and that training on European data is key to ensuring its products properly reflect the terminology and culture of the region._
2024-07-18
  • Posted by msmash on Thursday July 18, 2024 @12:40PM from the making-a-statement dept. Meta says it [won't be launching its upcoming multimodal AI model](https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations) -- capable of handling video, audio, images, and text -- in the European Union, citing regulatory concerns. From a report: _The decision will prevent European companies from using the multimodal model, despite it being released under an open license. Just last week, the EU finalized compliance deadlines for AI companies under its strict new AI Act. Tech companies operating in the EU will generally have until August 2026 to comply with rules around copyright, transparency, and AI uses like predictive policing. Meta's decision follows a similar move by Apple, which recently said it would likely exclude the EU from its Apple Intelligence rollout due to concerns surrounding the Digital Markets Act._
2024-07-19
  • Meta’s cost-cutting efforts at its metaverse division, Reality Labs, could help save the company $3 billion, Bank of America analysts said Friday. [Meta is reportedly cutting the budget for its Reality Labs](https://www.theinformation.com/articles/reality-comes-to-metas-reality-labs?rc=5xvgzc) hardware division, which makes its VR headsets, by about 20% between this year and 2026, The Information reported Thursday, citing unnamed sources. That doesn’t mean the company is halting its virtual and augmented reality innovations: the company is planning to release new Quest headsets and AR glasses in the next three years, the outlet said. The cost-cutting at Reality Labs is instead meant to bring the division’s seemingly out-of-control spending under control. While Bank of America’s Justin Post and Nitin Bansal said in a research note Friday that Meta could save an estimated $3 billion, they added that some of those cost savings could be reallocated to Meta’s AI efforts. But those efforts are also being put on hold in some regions (i.e., the European Union and Brazil) as [Meta looks to avoid growing regulatory scrutiny in the AI space](https://qz.com/meta-pause-generative-ai-brazil-multimodal-model-eu-1851599618). Meta’s plans for AI and virtual reality will likely come into clearer focus when the Facebook and Instagram parent reports its second-quarter earnings on July 31. Meta CEO Mark Zuckerberg has repeatedly reiterated his belief that the capital-M Metaverse is the future. “We continue making steady progress building the metaverse as well,” he said in a call with investors in March, discussing [the company’s first quarter financial results](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524). In the same breath, Meta reported a loss of $3.8 billion for its Reality Labs division. The company’s VR and AR efforts are surely still a money-loser for Meta, but Reality Labs is at least finding ways to shrink its losses — which fell 17% between the last three months of 2023 and the first quarter of 2024. Bank of America analysts maintained their buy rating of Meta’s stock on Friday. They see shares rising nearly 15% to $550 over the next year.
By the numbers:
**$55 billion:** How much Meta’s Reality Labs has lost the company since 2019
**30%:** How much first-quarter sales for Reality Labs, which totaled $440 million, rose from last year
**$3 billion:** How much Meta could save with new cost-cutting measures at Reality Labs
**14.8%:** How much Bank of America analysts see Meta’s share price rising over the next year — from $479 to $550
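That last figure is simply the implied upside from the analysts’ price target:

$$\frac{550 - 479}{479} \approx 0.148 = 14.8\%$$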
  • Meta quickly [shifted away from the metaverse](https://qz.com/meta-layoffs-2023-jobs-metaverse-ai-1850196575) to generative artificial intelligence, and now it’s pumping the brakes on some of its efforts amid regulatory scrutiny. On Wednesday, Meta said it was [pausing the use of its generative AI tools in Brazil](https://www.reuters.com/technology/artificial-intelligence/meta-decides-suspend-its-generative-ai-tools-brazil-2024-07-17/) due to opposition from the country’s government over the company’s privacy policy on personal data and AI, according to Reuters. Meta was [banned from training its AI models](https://www.gov.br/anpd/pt-br/assuntos/noticias/anpd-determina-suspensao-cautelar-do-tratamento-de-dados-pessoais-para-treinamento-da-ia-da-meta) on Brazilians’ personal data by the country’s National Data Protection Authority (ANPD) earlier this month. The Facebook owner had updated its privacy policy in May to give itself [permission to train AI on public Facebook, Messenger, and Instagram data](https://about.fb.com/br/news/2024/05/como-a-meta-esta-desenvolvendo-a-inteligencia-artificial-para-o-brasil/) in Brazil. The [ANPD said Meta’s privacy policy](https://apnews.com/article/brazil-tech-meta-privacy-data-93e00b2e0e26f7cc98795dd052aea8e1) poses “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” according to the Associated Press. Meanwhile, Meta has decided not to release its [upcoming and future multimodal AI models](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) in the European Union “due to the unpredictable nature of the European regulatory environment,” the company said in a statement shared with Axios. The company’s decision follows Apple, which said in June it would [likely not roll out its new Apple Intelligence and other AI features](https://qz.com/apple-not-release-apple-intelligence-european-union-dma-1851553830) in the bloc due to the Digital Markets Act. Even though Meta’s multimodal models will be under an open license, companies in Europe will not be able to use them because of Meta’s decision, Axios reported. And companies outside of the bloc could reportedly be blocked from offering products and services on the continent that use Meta’s models. However, Meta has a larger, text-only version of its Llama 3 model that will be made available in the EU when it’s released, the company told Axios. In June, Meta said it would [delay training](https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/) its [large language models](https://qz.com/ai-artificial-intelligence-glossary-vocabulary-terms-1851422473) on public data from Facebook and Instagram users in the European Union after facing pushback from the Irish Data Protection Commission (DPC). “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Meta said.
2024-07-23
  • Jul 23, 2024 11:05 AM The newest version of Llama will make AI more accessible and customizable, but it will also stir up debate over the dangers of releasing AI without guardrails. Most tech moguls hope to sell [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/) to the masses. But Mark Zuckerberg is giving away what Meta considers to be one of the world’s best AI models for free. Meta released the biggest, most capable version of a large language model called [Llama](https://www.wired.com/story/metas-open-source-llama-3-nipping-at-openais-heels/) on Monday, free of charge. Meta has not disclosed the cost of developing Llama 3.1, but Zuckerberg [recently told investors](https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-First-Quarter-2024-Results/default.aspx) that his company is spending billions on AI development. Through this latest release, Meta is showing that the closed approach favored by most AI companies is not the only way to develop AI. But the company is also putting itself at the center of a debate over the dangers posed by releasing AI without controls. Meta trains Llama in a way that prevents the model from producing harmful output by default, but the model can be modified to remove such safeguards. Meta says that Llama 3.1 is as clever and useful as the best commercial offerings from companies like [OpenAI](https://www.wired.com/tag/openai/), [Google](https://www.wired.com/tag/google/), and [Anthropic](https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features/). In certain benchmarks that measure progress in AI, Meta says the model is the smartest AI on Earth. “It’s very exciting,” says [Percy Liang](https://cs.stanford.edu/~pliang/), an associate professor at Stanford University who tracks open source AI. If developers find the new model to be just as capable as the industry’s leading ones, including [OpenAI’s GPT-4o](https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/), Liang says, it could see many move over to Meta’s offering. “It will be interesting to see how the usage shifts,” he says. In an [open letter](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/) posted with the release of the new model, Meta CEO Zuckerberg compared Llama to the open source [Linux](https://www.wired.com/tag/linux/) operating system. When Linux took off in the late ’90s and early 2000s, many big tech companies were invested in closed alternatives and criticized open source software as risky and unreliable. Today, however, Linux is widely used in cloud computing and serves as the core of the Android mobile OS. “I believe that AI will develop in a similar way,” Zuckerberg writes in his letter. “Today, several tech companies are developing leading closed models. But open source is quickly closing the gap.” However, Meta’s decision to give away its AI is not devoid of self-interest. Previous [Llama releases](https://www.wired.com/story/metas-open-source-llama-upsets-the-ai-horse-race/) have helped the company secure an influential position among AI researchers, developers, and startups.
Liang also notes that Llama 3.1 is not truly open source, because Meta imposes restrictions on its usage—for example, limiting the scale at which the model can be used in commercial products. The new version of Llama has 405 billion parameters, or tweakable elements. Meta has already released two smaller versions of Llama 3, one with 70 billion parameters and another with 8 billion. Meta today also released upgraded versions of these models, branded as Llama 3.1. Llama 3.1 is too big to be run on a regular computer, but Meta says that many cloud providers, including Databricks, Groq, AWS, and Google Cloud, will offer hosting options to allow developers to run custom versions of the model. The model can also be accessed at [Meta.ai](https://www.meta.ai/). Some developers say the new Llama release could have broad implications for AI development. [Stella Biderman](https://www.stellabiderman.com/), executive director of [EleutherAI](https://www.eleuther.ai/), an open source AI project, also notes that Llama 3 is not fully open source. But Biderman notes that a change to Meta’s latest license will let developers train their own models using Llama 3, something that most AI companies currently prohibit. “This is a really, really big deal,” Biderman says. Unlike OpenAI and Google’s latest models, Llama is not “multimodal,” meaning it is not built to handle images, audio, and video. But Meta says the model is significantly better at using other software such as a web browser, something that many researchers and companies [believe could make AI more useful](https://www.wired.com/story/fast-forward-forget-chatbots-ai-agents-are-the-future/). After OpenAI released ChatGPT in late 2022, [some AI experts called for a moratorium](https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/) on AI development for fear that the technology could be misused or too powerful to control. Existential alarm has since cooled, but many experts remain concerned that unrestricted AI models could be misused by hackers or used to speed up the development of biological or chemical weapons. “Cyber criminals everywhere will be delighted,” says Geoffrey Hinton, a Turing Award winner whose pioneering work on a field of machine learning known as deep learning laid the groundwork for large language models. Hinton joined Google in 2013 but [left the company last year](https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/) to speak out about the possible risks that might come with more advanced AI models. He says that AI is fundamentally different from open source software because models cannot be scrutinized in the same way. “People fine-tune models for their own purposes, and some of those purposes are very bad,” he adds. Meta has helped allay some fears by releasing previous versions of Llama carefully. The company says it puts Llama through rigorous safety testing before release, and adds that there is little evidence that its models make it easier to develop weapons. Meta said it will release several new tools to help developers keep Llama models safe by moderating their output and blocking attempts to break restrictions. Jon Carvill, a spokesman for Meta, says the company will decide on a case-by-case basis whether to release future models. Dan Hendrycks, a computer scientist and the director of the [Center for AI Safety](https://www.safe.ai/), a nonprofit organization focused on AI dangers, says Meta has generally done a good job of testing its models before releasing them.
He says that the new model could help experts understand future risks. “Today’s Llama 3 release will enable researchers outside big tech companies to conduct much-needed AI safety research.”
  • Meta has [released Llama 3.1](https://llama.meta.com/), its largest open-source AI model to date, in a move that challenges the closed approaches of competitors like OpenAI and Google. The new model, [boasting 405 billion parameters](https://ai.meta.com/blog/meta-llama-3-1/), is claimed by Meta to outperform GPT-4o and Claude 3.5 Sonnet on several benchmarks, with CEO Mark Zuckerberg predicting that Meta AI will become the most widely used assistant by year-end. Llama 3.1, which Meta says was trained using over 16,000 Nvidia H100 GPUs, is being made available to developers through partnerships with major tech companies including Microsoft, Amazon, and Google, potentially reducing deployment costs compared to proprietary alternatives. The release includes smaller versions with 70 billion and 8 billion parameters, and Meta is introducing new safety tools to help developers moderate the model's output. While Meta isn't disclosing exactly what data it used to train its models, the company confirmed it used synthetic data to enhance the model's capabilities. The company is also expanding its Meta AI assistant, powered by Llama 3.1, to support additional languages and integrate with its various platforms, including WhatsApp, Instagram, and Facebook, as well as its Quest virtual reality headset.
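Because the 405-billion-parameter model is far too large for most local hardware, in practice developers will reach Llama 3.1 through one of those hosting partners. As a rough, hypothetical sketch only: many hosts expose OpenAI-compatible chat endpoints, so a first call could look like the snippet below, where the base URL and model identifier are placeholders rather than any specific provider's documented values.

```python
# Hypothetical sketch of querying a cloud-hosted Llama 3.1 model through an
# OpenAI-compatible chat endpoint (a common pattern among hosting providers).
# The base_url and model name below are placeholders, not real values --
# consult your provider's documentation for the actual ones.
from openai import OpenAI

client = OpenAI(
    base_url="https://llama-host.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="llama-3.1-405b-instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what is Llama 3.1?"},
    ],
    max_tokens=100,
)
print(response.choices[0].message.content)
```

Pointing the same code at the 70-billion- or 8-billion-parameter variants would typically just be a change of model string.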
2024-08-23
  • Posted by msmash on Friday August 23, 2024 @12:02PM from the tough-luck dept. Meta Platforms has [canceled plans for a premium mixed-reality headset](https://www.theinformation.com/articles/meta-cancels-high-end-mixed-reality-headset) intended to compete with Apple's Vision Pro, _The Information_ reported Friday, citing sources. From the report: _Meta told employees at the company's Reality Labs division to stop work on the device this week after a product review meeting attended by Meta CEO Mark Zuckerberg, Chief Technology Officer Andrew Bosworth and other Meta executives, the employees said. The axed device, which was internally code-named La Jolla, began development in November and was scheduled for release in 2027, according to current and former Meta employees. It was going to contain ultrahigh-resolution screens known as micro OLEDs -- the same display technology used in Apple's Vision Pro._
2024-09-17
  • [Meta](https://www.fastcompany.com/91190917/restarts-plans-to-train-ai-with-uk-user-data-facebook-instagram-content-social-media-activity) said it’s banning Russian state media outlets from its social media platforms, alleging that the outlets used deceptive tactics to amplify Moscow’s propaganda. The announcement drew a rebuke from the Kremlin on Tuesday. The company, which owns Facebook, WhatsApp, and Instagram, said late Monday that it will roll out the ban over the next few days in an escalation of its [efforts to counter Russia’s covert influence operations](https://www.fastcompany.com/90725896/meta-will-expand-lock-your-profile-protections-to-russian-facebook-users). “After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT, and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a prepared statement. Kremlin spokesman Dmitry Peskov lashed out, saying that “such selective actions against Russian media are unacceptable” and that “Meta with these actions are discrediting themselves.” “We have an extremely negative attitude towards this. And this, of course, complicates the prospects for normalizing our relations with Meta,” Peskov told reporters during his daily conference call. RT, formerly known as Russia Today, and Rossiya Segodnya also denounced the move. “It’s cute how there’s a competition in the West—who can try to spank RT the hardest, in order to make themselves look better,” RT said in a release.
2024-09-25
  • Posted by msmash on Wednesday September 25, 2024 @02:02PM from the pushing-the-limits dept. Meta [unveiled prototype AR glasses codenamed Orion](https://www.theverge.com/24253908/meta-orion-ar-glasses-demo-mark-zuckerberg-interview) on Wednesday, featuring a 70-degree field of view, Micro LED projectors, and silicon carbide lenses that beam graphics directly into the wearer's eyes. In an interview with The Verge, CEO Mark Zuckerberg demonstrated the device's capabilities, including ingredient recognition, holographic gaming, and video calling, controlled by a neural wristband that interprets hand gestures through electromyography. Despite technological advances, Meta has shelved Orion's commercial release, citing manufacturing complexities and costs reaching $10,000 per unit, primarily due to difficulties in producing the silicon carbide lenses. The company now aims to launch a refined, more affordable version in coming years, with executives hinting at a price comparable to high-end smartphones and laptops. Zuckerberg views AR glasses as critical to Meta's future, potentially freeing the company from its reliance on smartphone platforms controlled by Apple and Google. The push into AR hardware comes as tech giants and startups intensify competition in the space, with Apple launching Vision Pro and Google partnering with Magic Leap and Samsung on headset development.
2024-10-04
  • Oct 4, 2024 9:00 AM The next frontier in generative AI is video—and with Movie Gen, Meta has now staked its claim. (An AI-generated example video was made from the prompt "A baby hippo swimming in the river. Colorful flowers float at the surface, as fish swim around the hippo. The hippo's skin is smooth and shiny, reflecting the sunlight that filters through the water." Courtesy of Meta.) Meta just announced its own media-focused [AI model](https://www.wired.com/tag/artificial-intelligence), called Movie Gen, that can be used to generate realistic video and audio clips. The company shared multiple 10-second clips generated with [Movie Gen](https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/), including a Moo Deng-esque baby hippo swimming around, to demonstrate its capabilities. While the tool is not yet available for use, the Movie Gen announcement comes shortly after Meta's Connect event, which showcased new and [refreshed hardware](https://www.wired.com/story/meta-quest-3s-headset/) and the latest version of its [large language model, Llama 3.2](https://www.wired.com/story/meta-releases-new-llama-model-ai-voice/). Going beyond the generation of straightforward [text-to-video](https://www.wired.com/story/text-to-video-ai-generators-filmmaking-hollywood/) clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was transformed to look like she was wearing steampunk binoculars. (Other example clips were generated from the prompts "make me a painter" and "a woman DJ spins records. She is wearing a pink jacket and giant headphones. There is a cheetah next to the woman." Courtesy of Meta.) Audio bites can be generated alongside the videos with Movie Gen. In the sample clips, an AI-generated man stands near a waterfall with audible splashes and the hopeful sounds of a symphony; the engine of a sports car purrs and tires screech as it zips around the track; and a snake slides along the jungle floor, accompanied by suspenseful horns. Meta shared some further details about Movie Gen in a research paper released Friday. Movie Gen Video consists of 30 billion parameters, while Movie Gen Audio consists of 13 billion parameters. (A model's parameter count roughly corresponds to how capable it is; by contrast, the largest variant of [Llama 3.1 has 405 billion parameters](https://www.wired.com/story/meta-ai-llama-3/).) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims that it outperforms competitive models in overall video quality. Earlier this year, CEO Mark Zuckerberg demonstrated Meta AI’s Imagine Me feature, where users can upload a photo of themselves and role-play their face into multiple scenarios, by posting an AI image of himself [drowning in gold chains](https://www.threads.net/@zuck/post/C9xxwZZyx5B?xmt=AQGzXnHzmnMqrWb6E16MB7-sBjd7WYocg9yooqdOatxWQg) on Threads. A video version of a similar feature is possible with the Movie Gen model—think of it as a kind of [ElfYourself](https://www.wired.com/2007/12/geekdad-mashup/) on steroids. What information has Movie Gen been trained on?
The specifics aren’t clear in Meta’s announcement post: “We’ve trained these models on a combination of licensed and publicly available data sets.” The [sources of training data](https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/) and [what’s fair to scrape from the web](https://www.wired.com/story/perplexity-is-a-bullshit-machine/) remain a contentious issue for generative AI tools, and it's rarely public knowledge what text, video, or audio clips were used to create any of the major models. It will be interesting to see how long it takes Meta to make Movie Gen broadly available. The announcement blog vaguely gestures at a “potential future release.” For comparison, OpenAI announced its [AI video model, called Sora](https://www.wired.com/story/openai-sora-generative-ai-video/), earlier this year and has not yet made it available to the public or shared any upcoming release date (though WIRED did receive a few exclusive Sora clips from the company for an [investigation into bias](https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/)). Considering Meta’s legacy as a social media company, it’s possible that tools powered by Movie Gen will start popping up, eventually, inside of Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model [available to creators](https://www.wired.com/story/generative-ai-tools-youtube-shorts-veo/) inside its YouTube Shorts sometime next year. While larger tech companies are still holding off on fully releasing video models to the public, you can experiment with AI video tools right now from smaller, up-and-coming startups like [Runway](https://runwayml.com/) and [Pika](https://pika.art/home). Give Pikaffects a whirl if you’ve ever been curious what it would be like to see yourself [cartoonishly crushed](https://www.threads.net/@crumbler/post/DAokPKetoMh?xmt=AQGzNNS-5u820OA0WpsHTIxnoDiVH50L_OwMbOEw2V9DLA) with a hydraulic press or suddenly melt in a puddle.
2024-10-21
  • Facebook owner [Meta](https://www.fastcompany.com/91211773/meta-platforms-2024-layoffs-reality-labs-instagram-whatsapp-year-of-efficiency) said on Friday it was releasing a batch of new [AI](https://www.fastcompany.com/91206477/meta-ai-chatbot-brazil-uk-chatgpt) models from its research division, including a “Self-Taught Evaluator” that may offer a path toward less human involvement in the AI development process. The release follows Meta’s introduction of the tool in an August paper, which detailed how it relies upon the same “chain of thought” technique used by OpenAI’s recently released o1 models to get it to make reliable judgments about models’ responses. That technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding and math. Meta’s researchers used entirely AI-generated data to train the evaluator model, eliminating human input at that stage as well. The ability to use AI to evaluate AI reliably offers a glimpse at a possible pathway toward building autonomous AI agents that can learn from their own mistakes, two of the Meta researchers behind the project told Reuters. Many in the AI field envision such agents as digital assistants intelligent enough to carry out a vast array of tasks without human intervention. Self-improving models could cut out the need for an often expensive and inefficient process used today called Reinforcement Learning from Human Feedback, which requires input from human annotators who must have specialized expertise to label data accurately and verify that answers to complex math and writing queries are correct.
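To make the pattern concrete, here is a minimal, hypothetical sketch of the "LLM as judge" loop that the Self-Taught Evaluator builds on, not Meta's actual code: the judge model reasons step by step before issuing a verdict, and the full transcript can be kept as synthetic training data for the next iteration of the evaluator. The `generate` callable stands in for any text-generation backend.

```python
# Illustrative sketch (not Meta's code) of an LLM-as-judge loop: the judge
# reasons step by step ("chain of thought") before picking a winner, and the
# transcript is retained as synthetic preference data for later training.

JUDGE_TEMPLATE = """You are evaluating two answers to the same question.
Question: {question}
Answer A: {answer_a}
Answer B: {answer_b}
Reason through the problem step by step, then end with exactly
"Verdict: A" or "Verdict: B"."""

def judge(generate, question: str, answer_a: str, answer_b: str) -> dict:
    """Ask a judge model for a reasoned preference between two answers.

    `generate` is any callable mapping a prompt string to a completion
    string -- a stand-in for whatever model API is actually used.
    """
    reasoning = generate(JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
    winner = "A" if reasoning.rstrip().endswith("Verdict: A") else "B"
    # Keep the whole transcript: (prompt, answers, reasoning, verdict)
    # records are exactly the synthetic data a self-taught evaluator can
    # be retrained on, with no human annotation in the loop.
    return {"question": question, "answers": (answer_a, answer_b),
            "reasoning": reasoning, "winner": winner}
```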
2024-11-01
  • ChatGPT takes on Google, Meta’s spending spree, and Microsoft’s data center problem: AI news roundup. Michael Hunter, an Atlanta-based real estate marketing professional and Apple ([AAPL](https://qz.com/quote/AAPL)) power user, has watched Apple’s new Apple Intelligence features evolve from promising to problematic. After a month with iOS 18.1's early release through his developer account, Hunter was impressed by the system’s enhanced Siri capabilities and responsiveness. [Read More](https://qz.com/apple-intelligence-ai-iphone-beta-features-siri-users-1851684957)
2025-01-15
  • Posted by msmash on Wednesday January 15, 2025 @01:01PM from the how-about-that dept. Executives and researchers leading Meta's AI efforts [obsessed over beating OpenAI's GPT-4 model](https://techcrunch.com/2025/01/14/meta-execs-obsessed-over-beating-openais-gpt-4-internally-court-filings-reveal/) while developing Llama 3, according to internal messages unsealed by a court in one of the company's ongoing AI copyright cases, Kadrey v. Meta. From a report: _"Honestly... Our goal needs to be GPT-4," said Meta's VP of Generative AI, Ahmad Al-Dahle, in an October 2023 message to Meta researcher Hugo Touvron. "We have 64k GPUs coming! We need to learn how to build frontier and win this race." Though Meta releases open AI models, the company's AI leaders were far more focused on beating competitors that don't typically release their models' weights, like Anthropic and OpenAI, and instead gate them behind an API. Meta's execs and researchers held up Anthropic's Claude and OpenAI's GPT-4 as a gold standard to work toward. The French AI startup Mistral, one of the biggest open competitors to Meta, was mentioned several times in the internal messages, but the tone was dismissive. "Mistral is peanuts for us," Al-Dahle said in a message. "We should be able to do better," he said later._
2025-01-24
  • Threads, Meta’s rival to X and Bluesky, is testing ads with certain brands in the United States and Japan, the company said Friday. “We know there will be plenty of feedback about how we should approach ads, and we are making sure they feel like Threads posts you’d find relevant and interesting,” Instagram head Adam Mosseri [said in a post](https://www.threads.net/@mosseri/post/DFN0dSVhL26). He added that the team will be monitoring the test “before scaling it more broadly.” The ads will show a “Sponsored” label as they appear in users’ feeds. Meta launched Threads in 2023 and has been focusing on growing its user base and keeping people logged on. Now that it has more than [300 million monthly active users](https://www.threads.net/@zuck/post/DDqBLlMyIGD) (with more than 100 million of those using it daily), better monetization efforts appear to be the next step. After all, social media is just one big way to turn eyeballs into revenue. Meta Platforms, parent company of Facebook, Instagram, and WhatsApp, is likely to share an update about Threads when it [reports fourth-quarter 2024 earnings](https://investor.atmeta.com/investor-news/press-release-details/2025/Meta-to-Announce-Fourth-Quarter-and-Full-Year-2024-Results/default.aspx) next week. Its stock on Friday afternoon was trading at near-record highs. Responses to Mosseri’s post announcing the test revealed frustration from some users. “You put in ads, there will be no reason to stay . . . ,” one user wrote. “I’ll leave the minute the ads start rolling by. Guaranteed.”
2025-01-29
  • Days after Chinese artificial intelligence startup DeepSeek sparked a [global tech stock sell-off](https://qz.com/nasdaq-nvidia-tech-stocks-deepseek-ai-djia-sp500-1851748172?_gl=1*10076yc*_ga*MjMyMTcyODYuMTcwNzE2NTQ5Mg..*_ga_V4QNJTT5L0*MTczODE2NjQ1OS40OTkuMS4xNzM4MTY4MjY1LjIxLjAuMA..), a homegrown rival said its new AI model performed even better. Alibaba Cloud ([BABA+1.03%](https://qz.com/quote/BABA)) released an upgraded version of its [flagship AI model, Qwen2.5-Max](https://mp.weixin.qq.com/s/hP-r8h-LliFUPYKbd3lkUQ), that performed better than top open-source competitors, including DeepSeek’s V3 model and Meta’s ([META+0.60%](https://qz.com/quote/META)) Llama 3.1 model, on various benchmarks, according to [results](https://mp.weixin.qq.com/s/hP-r8h-LliFUPYKbd3lkUQ) published by the firm on WeChat. The cloud computing subsidiary of Alibaba Group also found its Qwen2.5-Max showed comparable performance to OpenAI’s GPT-4 and Anthropic’s Claude 3.5 Sonnet — both closed-source models. The Chinese firm said its AI model “has demonstrated world-leading model performance in mainstream authoritative benchmarks,” including the Massive Multitask Language Understanding (MMLU), which evaluates general knowledge, and LiveCodeBench, which tests coding skills. The Qwen2.5-Max announcement follows DeepSeek’s launch of its [first-generation reasoning models, DeepSeek-R1](https://qz.com/china-ai-startup-deepseek-r1-v3-openai-reasoning-model-1851748222), last week, which demonstrated comparable performance to OpenAI’s reasoning models, o1-mini and o1, on several industry benchmarks, according to its [technical paper](https://api-docs.deepseek.com/news/news250120). The release of DeepSeek-R1 prompted Nasdaq, Dow Jones Industrial Average, and S&P 500 futures [to fall](https://qz.com/nasdaq-nvidia-tech-stocks-deepseek-ai-djia-sp500-1851748172?_gl=1*10076yc*_ga*MjMyMTcyODYuMTcwNzE2NTQ5Mg..*_ga_V4QNJTT5L0*MTczODE2NjQ1OS40OTkuMS4xNzM4MTY4MjY1LjIxLjAuMA..) Monday morning. Nvidia’s ([NVDA-5.05%](https://qz.com/quote/NVDA)) shares [plunged 17%](https://qz.com/nvidia-deepseek-r1-ai-model-chips-stock-rout-china-us-1851748667?_gl=1*ne4vve*_ga*MjMyMTcyODYuMTcwNzE2NTQ5Mg..*_ga_V4QNJTT5L0*MTczODE2NjQ1OS40OTkuMS4xNzM4MTY4MjY1LjIxLjAuMA..), wiping out nearly $600 billion in value — a record loss for a U.S. company. Investors were spooked by the DeepSeek-R1 launch, which comes after the December release of DeepSeek-V3. While Alibaba Cloud hasn’t disclosed its development costs, DeepSeek’s [claim that it built its model](https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf) for just $5.6 million using Nvidia’s reduced-capability graphics processing units has caught the market’s attention, challenging assumptions about the [massive investments needed for AI development](https://qz.com/stargate-ai-infrastructure-data-center-trump-openai-1851744873). According to the technical paper, DeepSeek used a cluster of 2,048 Nvidia H800 chips to train its V3 model — a less powerful version of the chipmaker’s H100 that it is allowed to sell to Chinese firms under U.S. chip restrictions. The cluster is also much smaller than the [tens of thousands of chips](https://developer.nvidia.com/blog/supercharging-llama-3-1-across-nvidia-platforms) U.S. firms are using to train similarly sized models.
DeepSeek’s release has called [Big Tech’s tens of billions in spending on AI](https://qz.com/tech-earnings-meta-microsoft-apple-deepseek-ai-nvidia-1851749011?_gl=1*1uj6qzz*_ga*MjMyMTcyODYuMTcwNzE2NTQ5Mg..*_ga_V4QNJTT5L0*MTczODE2NjQ1OS40OTkuMS4xNzM4MTY3NDY5LjYwLjAuMA..) into question ahead of a slate of earnings results, as well as the effectiveness of U.S. efforts to keep advanced chips out of China.
2025-02-05
  • He has reason to be optimistic, though: Meta is currently ahead of its competition thanks to the success of the Ray-Ban Meta smart glasses—the company sold [more than 1 million units](https://www.theverge.com/meta/603674/meta-ray-ban-smart-glasses-sales) last year. It also is preparing to roll out new styles thanks to a partnership with Oakley, which, like Ray-Ban, is under the EssilorLuxottica umbrella of brands. And while its current second-generation specs can’t show its wearer digital data and notifications, a third version complete with a small display is due for release this year, according to the [_Financial Times_](https://www.ft.com/content/77bd9117-0a2d-4bd7-9248-4dd288f695a4). The company is also reportedly working on a lighter, more advanced version of its Orion AR glasses, dubbed Artemis, that could go on sale as early as 2027, [_Bloomberg_](https://www.bloomberg.com/news/articles/2025-01-21/meta-hardware-plans-oakley-and-ar-like-glasses-apple-watch-and-airpods-rivals?sref=E9Urfma4) reports. Adding display capabilities will put the Ray-Ban Meta glasses on equal footing with Google’s unnamed Android XR glasses project, which sports an [in-lens display](https://blog.google/products/android/android-xr/) (the company has not yet announced a definite release date). The prototype the company demoed to journalists in September featured a version of its AI chatbot Gemini, and much the way Google built its Android OS to run on smartphones made by third parties, its Android XR software will eventually run on smart glasses made by other companies as well as its own. These two major players are competing to bring face-mounted AI to the masses in a race that’s bound to intensify, adds Rosenberg—especially given that both [Zuckerberg](https://s21.q4cdn.com/399680738/files/doc_financials/2024/q4/META-Q4-2024-Earnings-Call-Transcript.pdf) and Google cofounder [Sergey Brin](https://www.businessinsider.com/sergey-brin-google-glass-ai-killer-app-comments-project-astra-2024-5) have called smart glasses the “perfect” hardware for AI. “Google and Meta are really the big tech companies that are furthest ahead in the AI space on their own. They’re very well positioned,” he says. “This is not just augmenting your world, it’s augmenting your brain.” When the AR gaming company Niantic’s Michael Miller walked around CES, the gigantic consumer electronics exhibition that takes over Las Vegas each January, he says he was struck by the number of smaller companies developing their own glasses and systems to run on them, including Chinese brands DreamSmart, Thunderbird, and Rokid. While it’s still not a cheap endeavor—a business would probably need a couple of million dollars in investment to get a prototype off the ground, he says—it demonstrates that the future of the sector won’t depend on Big Tech alone. “On a hardware and software level, the barrier to entry has become very low,” says Miller, the augmented reality hardware lead at Niantic, which has partnered with Meta, Snap, and Magic Leap, among others. “But turning it into a viable consumer product is still tough. Meta caught the biggest fish in this world, and so they benefit from the Ray-Ban brand. It’s hard to sell glasses when you’re an unknown brand.” That’s why it’s likely ambitious smart glasses makers in countries like Japan and China will increasingly partner with eyewear companies known locally for creating desirable frames, generating momentum in their home markets before expanding elsewhere, he suggests.
These smaller players will also have an important role in creating new experiences for wearers of smart glasses. A big part of smart glasses’ usefulness hinges on their ability to send and receive information from a wearer’s smartphone—and third-party developers’ interest in building apps that run on them. The more the public can do with their glasses, the more likely they are to buy them.
2025-03-13
  • Meta ([META-4.64%](https://qz.com/quote/META)) is rolling out community notes on March 18, taking a page from the playbook of Elon Musk’s X. The incoming feature will ask users to fact-check or clarify claims in popular posts, marking a departure from Meta’s [former fact-checking system](https://www.facebook.com/journalismproject/programs/third-party-fact-checking/selecting-partners), which relied on fact-checking experts. “We won’t be reinventing the wheel. Initially we will use X’s open-source algorithm as the basis of our rating system,” Meta said in a [press release](https://about.fb.com/news/2025/03/testing-begins-community-notes-facebook-instagram-threads/) on Thursday. Twitter introduced community notes under the name Birdwatch in 2021, well before Musk bought the service and rebranded it as X. Users on X already rank other users’ notes, and the most popular response appears directly below posts. Meta said it will launch a similar feature on Facebook, Instagram and Threads, but only within the United States for now. The company eventually intends to roll out the new system globally. Meta added that user-submitted notes won’t actually appear beneath posts until it thinks its system is working properly. Meta first announced that it would retire its third-party fact-checking program in January. At the time, CEO [Mark Zuckerberg said](https://qz.com/meta-fact-check-elon-musk-trump-x-community-notes-1851733906?_gl=1*15p8iii*_ga*MzUxNzY2NjAwLjE3MjAwMTcyMjA.*_ga_V4QNJTT5L0*MTc0MTg3NzMyOS4yMTkuMS4xNzQxODgwODExLjYwLjAuMA..) that the company would replace it with community notes, similar to X, without giving much detail. Meta’s third-party fact-checking program started in 2016, shortly after President Donald Trump won his first election. At the time, Facebook faced criticism for failing to catch election-related misinformation on the platform, including disinformation campaigns led by foreign governments. “We expect Community Notes to be less biased than the third party fact checking program it replaces, and to operate at a greater scale when it is fully up and running,” the company said in the press release, saying the experts in the earlier fact-checking program had political biases that affected their judgment. “Community Notes allow more people with more perspectives to add context to more types of content, and because publishing a note requires agreement between different people, we believe it will be less prone to bias,” Meta said. Separately, Zuckerberg has said the change could also mean that Meta is “going to catch less bad stuff,” per [ABC](https://abcnews.go.com/US/why-did-meta-remove-fact-checkers-experts-explain/story?id=117417445). Meta’s community notes also won’t have penalties associated with them. Under the earlier system, posts that received third-party fact-checking intervention were shown less often on people’s feeds because they could harbor false or harmful information. That won’t be the case with posts that receive community notes. But X’s crowd-sourced fact-checking has also been deemed ill-equipped for handling misinformation. [Reports](https://apnews.com/article/x-musk-twitter-misinformation-ccdh-0fa4fec0f703369b93be248461e8005d) have found that accurate notes on misleading posts frequently failed to appear, and even when they did, the original post got significantly more views than the correcting note.
Meta shared that around 200,000 users have signed up to become Community Notes contributors so far across all three apps, and the waitlist is still open for those who wish to take part. The feature will be available in English, Spanish, Chinese, Vietnamese, French and Portuguese to start, before expanding to other languages over time. The rating approach Meta is inheriting from X works roughly as sketched below.
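For context on what that open-source algorithm does: X's published ranking is built around a bridging-based matrix factorization, in which each rating is explained by a general intercept, a rater bias, a note score, and a learned viewpoint factor. Because the factor term absorbs agreement among like-minded raters, a note's score stays high only when raters who usually disagree both rate it helpful. The toy sketch below illustrates that idea under simplifying assumptions; it is not the production system.

```python
# Toy sketch of bridging-based note ranking (simplified from the idea behind
# X's open-source Community Notes algorithm; not the production system).
# Each rating is modeled as:
#   rating ~ mu + rater_bias[u] + note_score[n] + rater_vec[u] . note_vec[n]
# The factor term soaks up "people who vote alike," so note_score stays high
# only when raters from different clusters agree that the note is helpful.
import numpy as np

def fit_note_scores(ratings, n_raters, n_notes, dim=1,
                    lr=0.05, reg=0.03, epochs=200):
    """ratings: iterable of (rater_id, note_id, value), value in {0.0, 1.0}."""
    rng = np.random.default_rng(0)
    mu = 0.0
    rater_bias = np.zeros(n_raters)
    note_score = np.zeros(n_notes)
    rater_vec = rng.normal(scale=0.1, size=(n_raters, dim))
    note_vec = rng.normal(scale=0.1, size=(n_notes, dim))
    for _ in range(epochs):
        for u, n, y in ratings:  # plain SGD on squared error
            pred = mu + rater_bias[u] + note_score[n] + rater_vec[u] @ note_vec[n]
            err = y - pred
            mu += lr * err
            rater_bias[u] += lr * (err - reg * rater_bias[u])
            note_score[n] += lr * (err - reg * note_score[n])
            ru, nv = rater_vec[u].copy(), note_vec[n].copy()
            rater_vec[u] += lr * (err * nv - reg * ru)
            note_vec[n] += lr * (err * ru - reg * nv)
    # Notes whose intercept clears a threshold (X uses roughly 0.4) are the
    # ones that would be shown publicly as helpful.
    return note_score
```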
2025-04-10
  • Posted by msmash on Thursday April 10, 2025 @01:00PM from the how-about-that dept. Meta says in its Llama 4 [release announcement](https://ai.meta.com/blog/llama-4-multimodal-intelligence/) that it's specifically [addressing "left-leaning" political bias](https://www.404media.co/facebook-pushes-its-llama-4-ai-model-to-the-right-wants-to-present-both-sides/) in its AI model, distinguishing this effort from traditional bias concerns around race, gender, and nationality that researchers have long documented. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," the company said. "All leading LLMs have had issues with bias -- specifically, they historically have leaned left," Meta stated, framing AI bias primarily as a political problem. The company claims Llama 4 is "dramatically more balanced" in handling sensitive topics and touts its lack of "strong political lean" compared to competitors.
2025-04-23
  • Meta has expanded both the feature set and availability of its Ray-Ban smart glasses. Notable updates [include live translation with offline support](https://www.theverge.com/news/654387/meta-smart-glasses-ray-ban-live-translation-ai) through downloadable language packs, the ability to send messages and make calls via Instagram, and conversations with Meta AI based on real-time visual context. The Verge reports: _Live translation was first teased at Meta Connect 2024 last October, and saw a limited rollout through Meta's Early Access Program in select countries last December. Starting today it's getting a wider rollout to all the markets where the Ray-Ban Meta smart glasses are available. You can hold a conversation with someone who speaks English, French, Italian, or Spanish, and hear a real-time translation through the smart glasses in your preferred language. If you download a language pack in advance, you can use the live translations feature without Wi-Fi or access to a cellular network, making it more convenient to use while traveling abroad. Meta also highlighted a few other features that are still en route or getting an expanded release. Live AI, which allows the Meta AI smart assistant to continuously see what you do for more natural conversations, is now "coming soon to general availability in the US and Canada." The ability to "send and receive direct messages, photos, audio calls, and video calls from Instagram on your glasses," similar to functionality already available through WhatsApp, Messenger, and iOS and Android's native messaging apps, is coming soon as well. Access to music apps like Spotify, Amazon Music, Shazam, and Apple Music is starting to expand beyond the US and Canada, Meta says. However, asking Meta AI to play music, or for more information about what you're listening to, will still only be available to those whose "default language is set to English."_