2024-02-05
-
An anonymous reader quotes a report from TechCrunch: _The recent surprise announcement that Meta will soon be [shutting down its Facebook Groups API](https://techcrunch.com/2024/02/05/meta-cuts-off-third-party-access-to-facebook-groups-leaving-developers-and-customers-in-disarray/) is throwing some businesses and social media marketers into disarray. On January 23, Meta [announced](https://developers.facebook.com/blog/post/2024/01/23/introducing-facebook-graph-and-marketing-api-v19/) the release of its Facebook Graph API v19.0, which included the news that the company would be deprecating its existing Facebook Groups API. The latter, which is used by developers and businesses to schedule posts to Facebook Groups, will be removed within 90 days, Meta said. This includes all the Permissions and Reviewable Features associated with the API, it also noted._ _Meta explained that a major use case for the API was a feature that [allowed developers to privately reply](https://developers.facebook.com/docs/messenger-platform/discovery/private-replies/) in Facebook Groups. For example, a small business that wanted to send a single message to a person who had posted or commented in its Facebook Group could do so through the API. However, Meta said that another change in the new v19.0 API would enable this feature without the need for the Groups API. But developers told TechCrunch that the shutdown of the API would cause problems for companies that offer solutions to customers who want to schedule and automate their social media posts. \[...\]_ _What's more, developers tell us that Meta's motivation behind the API's shutdown is unclear. It could be that Facebook Groups don't generate ad revenue and the shutdown of the API will leave developers without a workaround, but Meta hasn't clarified if that's the case. Instead, Meta's blog post only mentioned one use case that would be addressed through the new v19.0 API. \[...\] On Meta's forum for developers, one developer says they're "pretty shocked" by the company's announcement, noting their app relies on the Groups API and will essentially no longer work when the shutdown occurs. Others are frustrated that Meta hasn't clearly explained whether posting to Groups will be done with a Page Access token going forward, as the announcement is worded as though that part is only relevant for those posting private replies, not for posting to the group as a whole. \[...\] The whole thing could just be a messaging mistake -- perhaps Meta forgot to include the part where it was going to explain what its new solution would be. There is concern, however, that Meta is deprioritizing developers' interests, having recently shut down its developer bug portal as well._
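For context on what developers are losing, here is a minimal sketch of the kind of call third-party scheduling tools made through the Groups API. The group ID, access token, and API version below are hypothetical placeholders, and the group-feed publishing flow it shows is the sort of capability the deprecation removes; treat it as an illustrative sketch, not current, supported usage.

```python
import requests

# Hypothetical placeholders for illustration only.
GRAPH_API_VERSION = "v18.0"      # a pre-v19.0 Graph API version
GROUP_ID = "1234567890"          # the target Facebook Group's ID
ACCESS_TOKEN = "EAAB..."         # a token carrying the (now-deprecated) publish_to_groups permission

def publish_group_post(message: str) -> dict:
    """Publish a post to a group's feed, the core operation scheduling tools automated."""
    url = f"https://graph.facebook.com/{GRAPH_API_VERSION}/{GROUP_ID}/feed"
    resp = requests.post(url, data={"message": message, "access_token": ACCESS_TOKEN})
    resp.raise_for_status()
    return resp.json()  # typically an object like {"id": "<group_id>_<post_id>"}

if __name__ == "__main__":
    print(publish_group_post("Scheduled update: our weekly newsletter is out."))
```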
2024-02-28
-
An anonymous reader [shares a report](https://www.theinformation.com/articles/meta-wants-llama-3-to-handle-contentious-questions-as-google-grapples-with-gemini-backlash) (paywalled): _As Google [grapples with the backlash](https://tech.slashdot.org/story/24/02/28/0633233/google-ceo-calls-ai-tools-controversial-responses-completely-unacceptable) over the historically inaccurate responses on its Gemini chatbot, Meta Platforms is dealing with a related issue. As part of its work on the forthcoming version of its large language model, Llama 3, Meta is trying to overcome a problem perceived in Llama 2: Its answers to anything at all contentious aren't helpful. Safeguards added to Llama 2, which Meta released last July and which powers the artificial intelligence assistant in its apps, prevent the LLM from answering a broad range of questions deemed controversial. These guardrails have made Llama 2 appear too "safe" in the eyes of Meta's senior leadership, as well as among some researchers who worked on the model itself, according to people who work at Meta. \[...\] Meta's conservative approach with Llama 2 was designed to ward off any public relations disasters, said the people who work at Meta. But researchers are now trying to loosen up Llama 3 so it engages more with users when they ask about difficult topics, offering context rather than just shutting down tricky questions, said two of the people who work at Meta. The new version of the model will in theory be able to better distinguish when a word has multiple meanings. For example, Llama 3 might understand that a question about how to kill a vehicle's engine means asking how to shut it off rather than end its life. Meta also plans to appoint someone internally in the coming weeks to oversee tone and safety training as part of its efforts to make the model's responses more nuanced, said one of the people. The company plans to release Llama 3 in July, though the timeline could still change, they added._
2024-04-09
-
Meta Platforms is planning to launch two small versions of its forthcoming Llama 3 large-language model next week, _The Information_ [has reported](https://www.theinformation.com/articles/meta-platforms-to-launch-small-versions-of-llama-3-next-week) _\[[non-paywalled link](https://www.theverge.com/2024/4/9/24125217/meta-llama-smaller-lightweight-model-ai)\]_. From the report: _The models will serve as a precursor to the launch of the biggest version of Llama 3, expected this summer. Release of the two small models will likely help spark excitement for the forthcoming Llama 3, which will be coming out roughly a year after Llama 2 launched last July. It comes as several companies, including Google, Elon Musk's xAI and Mistral, have released open-source LLMs. Meta hopes Llama 3 will catch up with OpenAI's GPT-4, which can answer questions based on images users upload to the chatbot. The biggest version will be multimodal, which means it will be capable of understanding and generating both text and images. In contrast, the two small models to be released next week won't be multimodal, a Meta employee said._
2024-04-18
-
Meta Platforms on Thursday released early versions of its latest large language model, Llama 3, and an image generator that updates pictures in real time while users type prompts, as it races to catch up to generative AI market leader [OpenAI](https://www.theguardian.com/technology/openai). The models will be integrated into virtual assistant Meta AI, which the company is pitching as the most sophisticated of its free-to-use peers. The assistant will be given more prominent billing within Meta’s Facebook, Instagram, WhatsApp and Messenger apps as well as a new standalone website that positions it to compete more directly with Microsoft-backed OpenAI’s breakout hit [ChatGPT](https://www.theguardian.com/technology/chatgpt). The announcement comes as [Meta](https://www.theguardian.com/technology/meta) has been scrambling to push generative AI products out to its billions of users to challenge OpenAI’s leading position on the technology, involving an overhaul of computing infrastructure and the consolidation of previously distinct research and product teams. The social media giant equipped Llama 3 with new computer coding capabilities and fed it images as well as text this time, though for now the model will output only text, Chris Cox, Meta’s chief product officer, said in an interview. More advanced reasoning, like the ability to craft longer multi-step plans, will follow in subsequent versions, he added. Versions planned for release in the coming months will also be capable of “multimodality”, meaning they can generate both text and images, Meta said in blog posts. “The goal eventually is to help take things off your plate, just help make your life easier, whether it’s interacting with businesses, whether it’s writing something, whether it’s planning a trip,” Cox said. Cox said the inclusion of images in the training of Llama 3 would enhance an update rolling out this year to the Ray-Ban Meta smart glasses, a partnership with glasses maker EssilorLuxottica, enabling Meta AI to identify objects seen by the wearer and answer questions about them. Meta also announced a new partnership with Alphabet’s Google to include real-time search results in the assistant’s responses, supplementing an existing arrangement with Microsoft’s Bing. The Meta AI assistant is expanding to more than a dozen markets outside the US with the update, including Australia, Canada, Singapore, Nigeria and Pakistan. Meta is “still working on the right way to do this in Europe”, Cox said, where privacy rules are more stringent and the forthcoming AI Act is poised to impose requirements like disclosure of models’ training data. Generative AI models’ voracious need for data has emerged as a major source of tension in the technology’s development. Meta has been releasing models like Llama 3 for free commercial use by developers as part of its catch-up effort, as the success of a powerful free option could stymie rivals’ plans to earn revenue off their proprietary technology. The strategy has also elicited safety concerns from critics wary of what unscrupulous developers may use the model to build. Mark Zuckerberg, Meta CEO, nodded at that competition in a video accompanying the announcement, in which he called Meta AI “the most intelligent AI assistant that you can freely use”. Zuckerberg said the biggest version of Llama 3 is currently being trained with 400bn parameters and is already scoring 85 MMLU, citing metrics used to convey the strength and performance quality of AI models. 
The two smaller versions rolling out now have 8bn parameters and 70bn parameters, and the latter scored around 82 MMLU, or Massive Multitask Language Understanding, he said. Developers have complained that the previous Llama 2 version of the model failed to understand basic context, confusing queries on how to “kill” a computer program with requests for instructions on committing murder. Rival Google has run into similar problems and recently paused use of its Gemini AI image generation tool after it drew criticism for churning out inaccurate depictions of historical figures. Meta said it cut down on those problems in Llama 3 by using “high quality data” to get the model to recognize nuance. It did not elaborate on the datasets used, although it said it fed Llama 3 seven times as much data as it used for Llama 2 and leveraged “synthetic”, or AI-created, data to strengthen areas like coding and reasoning. Cox said there was “not a major change in posture” in terms of how the company sourced its training data.
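Both excerpts above quote MMLU numbers, so a quick toy illustration of what such a score measures may help: MMLU is a multiple-choice benchmark spanning 57 academic subjects, and the reported figure is essentially the percentage of questions answered correctly. The grader below is a simplified stand-in, not Meta's or the benchmark's official evaluation harness.

```python
# Toy illustration of an MMLU-style score: percent of multiple-choice answers that match the key.
# Real evaluations compare the model's chosen option (A-D), or its log-likelihoods over the
# options, against the reference answer across thousands of questions.
def mmlu_style_accuracy(predictions: list[str], references: list[str]) -> float:
    correct = sum(p.strip().upper() == r.strip().upper() for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# Hypothetical model answers vs. gold answers for five questions:
print(mmlu_style_accuracy(["A", "C", "B", "D", "A"], ["A", "C", "D", "D", "A"]))  # 80.0
```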
2024-04-19
-
Meta is ready to put its AI assistant in the ring with ChatGPT in the chatbot fight. The tech giant said on Thursday that it is bringing [Meta AI to all of its platforms, including Facebook and Instagram](https://about.fb.com/news/2024/04/meta-ai-assistant-built-with-llama-3/), calling it “the most intelligent AI assistant you can use for free.” The AI assistant can be used in platform feeds, chats, and search. Meta also said the AI assistant is faster at generating high quality images, and can “change with every few letters typed,” so users can see it generating their image. Meta AI, which was [introduced in September](https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/), is now available in English in over a dozen countries outside of the U.S., including Australia, Nigeria, and Singapore. The company also provides Meta AI as a website. The assistant was built with models from [Meta’s latest generative AI model family, Llama 3](https://ai.meta.com/blog/meta-llama-3/), which it also introduced on Thursday. Llama 3 8B has 8 billion parameters — the [variables models learn during training](https://ai-event.ted.com/glossary/parameters), used to make predictions — while Llama 3 70B has 70 billion parameters. Meta said Llama 3 is “a major leap” over its predecessor, Llama 2. The Llama 3 models will be available on the Google Cloud and Microsoft Azure platforms, among others, and supported by hardware from companies including Intel and Nvidia. Llama 3 “demonstrates state-of-the-art performance on a wide range of industry benchmarks,” Meta said, claiming it [outperforms other models, including Google’s Gemini and Anthropic’s Claude 3](https://llama.meta.com/llama3/), in a series of benchmarks. “We believe these are the best open source models of their class, period,” Meta said. The two newest Llama 3 models are only the beginning of Meta’s plans for the family. The company said it is still training models that are over 400 billion parameters. It plans to release multiple models over the coming months with more advancements including multimodality, which is when a model can [understand and generate different types of content](https://www.turing.com/resources/multimodal-llms), including photo and video. Meta has spent billions on chips to build on its AI ambitions, making itself [one of Nvidia’s top customers](https://qz.com/nvidia-generative-ai-google-microsoft-meta-1851206854). In March, Tom Alison, head of Facebook, said at a tech conference that Meta is [developing an AI model to power recommendations for its platforms](https://qz.com/meta-facebook-instagram-ai-video-feed-reels-1851315404) as part of its “technology roadmap” for now until 2026.
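Since the 8B and 70B weights are distributed openly on cloud platforms and model hubs, a minimal sketch of loading the smaller model with the Hugging Face `transformers` library is shown below. The repository ID is an assumption about how the weights are published, access is gated behind Meta's license, and running this locally requires a GPU with enough memory.

```python
# Minimal sketch: generating text with the 8B Llama 3 model via Hugging Face transformers.
# Assumes the weights are published under the repo ID below (gated by Meta's license)
# and that a suitable GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "In one sentence, what is a model parameter?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```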
2024-04-24
-
Meta stock plummeted more than 16% in after-hours trading Wednesday, even as [the Facebook parent company reported better-than-anticipated sales](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524). Meta reported revenues of $36.5 billion for the three months ended March 31, almost 30% higher than the same period last year and ahead of the expectations of Wall Street analysts surveyed by FactSet. And its profits more than doubled to $12 billion. “It’s been a good start to the year,” Meta CEO Mark Zuckerberg said in the company’s earnings release Wednesday. “The new version of Meta AI with Llama 3 is another step towards building the world’s leading AI. We’re seeing healthy growth across our apps and we continue making steady progress building the metaverse as well.” Investors didn’t seem to care about those good fortunes, though, as Meta’s share price sank 16.5% to about $412 in after-hours trading. They cared, instead, about [its lukewarm second quarter outlook](https://qz.com/meta-more-than-doubles-q1-profit-but-revenue-guidance-p-1851433481), which pegged expected revenues for the three months ending June 30 at $36.5 billion to $39 billion. Still, Meta stock has more than doubled over the last year, up 42.5% so far in 2024 and 137.8% in the last 12 months. The company has been one of the top-performing tech stocks. It’s part of the so-called “Fab Four,” the A-listers of the “Magnificent Seven” big tech stocks that continue to rally even as Google parent Alphabet, Apple, and Tesla have fallen flat or worse.
2024-04-25
-
Shares in Meta slumped 15% when Wall Street opened on Thursday, wiping about $190bn off the value of the Facebook and Instagram parent company, as investors reacted to a [pledge to ramp up spending](https://www.theguardian.com/technology/2024/apr/24/meta-earnings) on artificial intelligence. Mark Zuckerberg, Meta’s founder and chief executive, said on a conference call on Wednesday that spending on the technology would have to grow “meaningfully” before the company could make “much revenue” from new AI products. Shares in [Meta](https://www.theguardian.com/technology/meta) had been boosted in 2023 by Zuckerberg’s tough action on costs in what he described as a “year of efficiency”. A relaxation of that restraint has rattled investors after Meta raised the upper bound of its capital expenditure guidance on Wednesday, from $37bn to $40bn. Last week, Meta [released Llama 3](https://www.theguardian.com/technology/2024/apr/18/meta-ai-llama3-release), the latest version of its AI model, alongside an image generator that updates pictures in real time while users type prompts. The company’s AI-powered assistant, Meta AI, is expanding to its platforms in more than a dozen markets outside the US with the update, including Australia, Canada, Singapore, Nigeria and Pakistan. Chris Cox, Meta’s chief product officer, said the company was “still working on the right way to do this in Europe”. The share decline follows a record gain in market value by Meta in February, when the company added $196bn to its stock market capitalisation – a measure of a company’s worth – after declaring its first dividend. At the time it was the biggest one-day gain in Wall Street history. However, weeks later, Nvidia, the leading supplier of chips for training and operating AI models, [smashed that record with a $277bn gain](https://www.theguardian.com/business/2024/feb/22/japan-nikkei-european-shares-record-highs-ai-nvidia-stoxx-600).
-
Meta stock fell more than 10% Thursday, even as the Facebook parent company reported better-than-anticipated sales in its quarterly earnings the day before. The losses appeared to be driven by [the company’s steep Metaverse losses](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524), and [CEO Mark Zuckerberg’s commitment to continue that spending](https://qz.com/meta-stock-earnings-facebook-wall-street-expectations-1851433671). The stock dropped more than 15% in pre-market trading Thursday before recovering some of those losses to close down 10.5% on the day. Meta reported revenues of $36.5 billion for the three months ended March 31, almost 30% higher than the same period last year and ahead of the expectations of Wall Street analysts surveyed by FactSet. And its profits more than doubled to $12 billion. Earnings per share were $4.71, more than the $4.32 expected. Investors didn’t seem to care about those good fortunes, though, as Meta’s share price sank 16.5% in after-hours trading Wednesday and was down 15.3% before markets opened Thursday. They cared, instead, about [its lukewarm second-quarter outlook](https://qz.com/meta-more-than-doubles-q1-profit-but-revenue-guidance-p-1851433481). The company issued light revenue guidance and is expecting revenues for the three months ending June 30 at $36.5 billion to $39 billion. “It’s been a good start to the year,” Meta CEO Mark Zuckerberg said in the company’s earnings release Wednesday. “The new version of Meta AI with Llama 3 is another step towards building the world’s leading AI. We’re seeing healthy growth across our apps and we continue making steady progress building the metaverse as well.” Meta stock has more than doubled over the last year, [up 39% so far in 2024 and 135% in the last 12 months.](https://www.tipranks.com/stocks/meta) The company has been one of the top-performing tech stocks. It’s part of the so-called “Fab Four,” the A-listers of the “Magnificent Seven” big tech stocks that continue to rally even as Google parent Alphabet, Apple, and Tesla have fallen flat or worse. _–Laura Bratton contributed to this article_
-
Apr 25, 2024 12:00 PM Meta’s decision to give away powerful AI software for free could threaten the business models of OpenAI and Google. ![Digital generated image of layered blue speech bubbles against a blue background](https://media.wired.com/photos/66299d1dc79850606ceb4921/master/w_2560%2Cc_limit/Meta-Llama-3-Fast-Forward-Business.jpg) Illustration: Andriy Onufriyenko/Getty Images Jerome Pesenti has a few reasons to celebrate Meta’s decision last week to [release Llama 3](https://www.wired.com/story/meta-is-already-training-a-more-powerful-sucessor-to-llama-3/), a powerful open source [large language model](https://www.wired.com/story/how-quickly-do-large-language-models-learn-unexpected-skills/) that anyone can download, run, and build on. Pesenti [used to be vice president of](https://www.wired.com/story/facebooks-ai-says-field-hit-wall/) [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/) at [Meta](https://www.wired.com/tag/meta/) and says he often pushed the company to consider releasing its technology for others to use and build on. But his main reason to rejoice is that his new startup will get access to an AI model that he says is very close in power to [OpenAI’s industry-leading text generator GPT-4](https://www.wired.com/story/5-updates-gpt-4-turbo-openai-chatgpt-sam-altman/), but considerably cheaper to run and more open to outside scrutiny and modification. “The release last Friday really feels like a game-changer,” Pesenti says. His new company, [Sizzle](https://www.szl.ai/), an AI tutor, currently uses GPT-4 and other AI models, both closed and open, to craft problem sets and curricula for students. His engineers are evaluating whether Llama 3 could replace OpenAI’s model in many cases. Sizzle’s story may augur a broader shift in the balance of power in AI. OpenAI changed the world with ChatGPT, setting off a wave of AI investment and drawing more than 2 million developers to its cloud APIs. But if open source models prove competitive, developers and entrepreneurs may decide to stop paying to access the latest model from OpenAI or Google and use Llama 3 or one of the other increasingly powerful open source models that are popping up. “It’s going to be an interesting horse race,” Pesenti says of competition between open models like Llama 3 and closed ones such as GPT-4 and Google’s Gemini. Meta’s previous model, Llama 2, was already influential, but the company says it made the latest version more powerful by feeding it larger amounts of higher-quality training data, with new techniques developed to filter out redundant or garbled content and to select the best mixture of datasets to use. Pesenti says running Llama 3 on a cloud platform such as [Fireworks.ai](http://www.fireworks.ai/) costs just a twentieth of what it costs to access GPT-4 through an API. He adds that Llama 3 can be configured to respond to queries extremely quickly, a key consideration for developers at companies like his that rely on tapping into models from different providers. “It's an equation between latency, cost, and accuracy,” he says. Open models appear to be dropping at an impressive clip. A couple of weeks ago, I went inside startup Databricks [to witness the final stages of an effort to build DBRX](https://www.wired.com/story/dbrx-inside-the-creation-of-the-worlds-most-powerful-open-source-ai-model/), a language model that was briefly the best open one around. That crown is now Llama 3’s.
Ali Ghodsi, CEO of Databricks, also describes Llama 3 as “game-changing” and says the larger model “is approaching the quality of GPT-4—that levels the playing field between open and closed-source LLMs.” Llama 3 also showcases the potential for making AI models smaller, so they can be run on less powerful hardware. Meta released two versions of its latest model, one with 70 billion parameters—a measure of the variables it uses to learn from training data—and another with 8 billion. The smaller model is compact enough to run on a laptop but is remarkably capable, at least in WIRED’s testing. Two days before Meta’s release, [Mistral](https://mistral.ai/), a French AI company founded by alumni of Pesenti’s team at Meta, [open sourced](https://mistral.ai/news/mixtral-8x22b/) Mixtral 8x22B. It has 141 billion parameters but uses only 39 billion of them at any one time, a design known as a mixture of experts. Thanks to this trick, the model is considerably more capable than some models that are much larger. Meta isn’t the only tech giant releasing open source AI. This week Microsoft released [Phi-3-mini](https://export.arxiv.org/abs/2404.14219) and Apple released [OpenELM](https://huggingface.co/apple/OpenELM#llm360), two tiny but capable free-to-use language models that can run on a smartphone. Coming months will show whether Llama 3 and other open models really can displace premium AI models like GPT-4 for some developers. And even more powerful open source AI is coming. Meta is working on a massive 400-billion-parameter version of Llama 3 that chief AI scientist [Yann LeCun](https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/) says should be one of the most capable in the world. Of course, all this openness is not purely altruistic. Meta CEO Mark Zuckerberg says opening up its AI models [should ultimately benefit the company](https://twitter.com/i/bookmarks?post_id=1782469953054179692) by lowering the cost of technologies it relies on, for example by spawning compatible tools and services that Meta can use for itself. He left unsaid that it may also be to Meta’s benefit to prevent OpenAI, Microsoft, or Google from dominating the field.
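The mixture-of-experts design mentioned above (141 billion total parameters, roughly 39 billion active per token) can be illustrated with a toy routing layer. The sketch below is a minimal, self-contained illustration of the idea only; the layer sizes, expert count, and top-k value are made up for the example and do not reflect Mixtral's or Llama's actual architecture.

```python
# Toy mixture-of-experts layer: a router picks the top-k experts per token, so only a
# fraction of the layer's total parameters are active for any given token.
# All sizes here are illustrative and much smaller than any production model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                          # x: (n_tokens, d_model)
        gate_logits = self.router(x)               # (n_tokens, n_experts)
        weights, expert_idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # mixing weights over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):             # only the selected experts run per token
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(TinyMoELayer()(x).shape)  # torch.Size([10, 64])
```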
-
After Meta’s stock [meltdown](https://www.fastcompany.com/91112579/meta-stock-plunges-2024-q1-earnings) late Wednesday afternoon and Thursday, the tech sector is roaring back in after-hours trading on the strength of earnings reports from Alphabet, Microsoft, and Snap. Snap shares were up more than 30% after its earnings release and Alphabet was up 13%. Microsoft shares, meanwhile, jumped 5.5%. There were numerous reasons for the surges. Alphabet announced its first-ever dividend (of 20 cents per share) and a $70 billion stock buyback. And Snap gave guidance for future quarters, saying it expects daily active users to jump by 9 million in the second quarter to 431 million, a higher number than analysts were expecting. But the backbone for the investor celebration? [Artificial intelligence](https://www.fastcompany.com/91029555/artificial-intelligence-most-innovative-companies-2024). Microsoft reported growth of 31% in its Azure unit and said 7 percentage points of that growth came from AI. “Microsoft Copilot and Copilot stack are orchestrating a new era of AI transformation, driving better business outcomes across every role and industry,” said CEO Satya Nadella in the earnings release. Google’s parent company, meanwhile, saw cloud revenue of $9.6 billion and declared the “Gemini era” was underway. “There’s great momentum across the company,” said Sundar Pichai, Alphabet CEO, in a statement. “Our leadership in AI research and infrastructure, and our global product footprint, position us well for the next wave of AI innovation.” The post-market stock movements show that investor enthusiasm for AI isn’t slowing down, but _how_ companies talk about the technology can make a big difference. Meta, for instance, had a strong first quarter in terms of earnings and revenues, but Mark Zuckerberg’s focus on the conference call about all the ways the company was spending money appeared to spook investors, who sent shares of the company plunging. Meta shares lost 10.5% of their worth on Thursday—roughly a $100 billion loss in value. The three companies that reported earnings on Thursday will certainly be spending heavily on continuing AI research in the months and years ahead as well, but they were a bit less fatalistic in their communication with investors. Microsoft [noted](https://www.microsoft.com/en-us/investor/earnings/fy-2024-q3/press-release-webcast) that “capital expenditures including assets acquired under finance leases were $14 billion to support demand in our cloud and AI offerings,” justifying the spend with a few well-placed words about demand. That kept analysts happy. “Yes, investors should keep an eye on potential AI overspending,” Emarketer senior director of briefings Jeremy Goldman tells _Fast Company_ in a statement. “But for now, Satya Nadella’s forward-looking strategy is building value by infusing productive intelligence across Microsoft’s entire portfolio – from the cloud to the desktop. With AI weaving its way into every offering, Microsoft may just cement its stay as the enterprise’s most indispensable partner.” Snap, meanwhile, [said](https://s25.q4cdn.com/442043304/files/doc_financials/2024/q1/Q1-24-Press-Release_FINAL-4-25-24.pdf), “We continue to invest in Generative AI models and automation for the creation of ML and AI Lenses, which contributed to the number of ML and AI Lenses viewed by Snapchatters increasing by more than 50% year-over-year.” Again, the emphasis was on the results of the spending.
Alphabet largely dodged the issue of AI spending, [saying](https://abc.xyz/assets/91/b3/3f9213d14ce3ae27e1038e01a0e0/2024q1-alphabet-earnings-release-pdf.pdf), “certain costs are not allocated to our segments because they represent Alphabet-level activities. These costs primarily include AI-focused shared R&D activities, including development costs of our general AI models.” But Alphabet also announced the dividend, so investors were going to go crazy about that no matter what. AI has been catnip to Wall Street for over a year now. And despite Meta’s blunt honesty in its earnings report, shares of that company are still more than twice the price of where they were a year ago. The next piece of the AI puzzle won’t come for another month or so, when [Nvidia reports earnings](https://www.fastcompany.com/91034272/nvidia-nvda-earnings-record-265-revenue-growth-moving-stock-market) on Wednesday, May 22.
2024-07-01
-
An anonymous reader quotes a report from Ars Technica: _Meta continues to hit walls with its heavily scrutinized plan to comply with the European Union's strict online competition law, the Digital Markets Act (DMA), by [offering Facebook and Instagram subscriptions](https://arstechnica.com/tech-policy/2024/07/metas-pay-for-privacy-plan-falls-afoul-of-the-law-eu-regulators-say/) as an alternative for privacy-inclined users who want to opt out of ad targeting. Today, the European Commission (EC) [announced](https://ec.europa.eu/commission/presscorner/detail/en/ip_24_3582) preliminary findings that Meta's so-called "pay or consent" or "pay or OK" model -- which gives users a choice either to pay for access to its platforms or to consent to the collection of their data for targeted ads -- is not compliant with the DMA. According to the EC, Meta's advertising model violates the DMA in two ways. First, it "does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the 'personalized ads-based service.'" And second, it "does not allow users to exercise their right to freely consent to the combination of their personal data," the press release said. Now, Meta will have a chance to review the EC's evidence and defend its policy, with today's findings kicking off a process that will take months. The EC's investigation is expected to conclude next March. Thierry Breton, the commissioner for the internal market, said in the press release that the preliminary findings represent "another important step" to ensure Meta's full compliance with the DMA. "The DMA is there to give back to the users the power to decide how their data is used and ensure innovative companies can compete on equal footing with tech giants on data access," Breton said. A Meta spokesperson told Ars that Meta plans to fight the findings -- which could trigger fines up to 10 percent of the company's worldwide turnover, as well as fines up to 20 percent for repeat infringement, if Meta loses. The EC agreed that more talks were needed, writing in the press release, "the Commission continues its constructive engagement with Meta to identify a satisfactory path towards effective compliance."_ Meta continues to claim that its "subscription for no ads" model was "endorsed" by the highest court in Europe, the Court of Justice of the European Union (CJEU), last year. "Subscription for no ads follows the direction of the highest court in Europe and complies with the DMA," Meta's spokesperson said. "We look forward to further constructive dialogue with the European Commission to bring this investigation to a close." Meta rolled out its ad-free subscription service option [last November](https://tech.slashdot.org/story/23/10/30/1229247/facebook-and-instagram-to-offer-subscription-for-no-ads-in-europe). "Depending on where you purchase it will cost $10.5/month on the web or $13.75/month on iOS and Android," said the company in a blog post. "Regardless of where you purchase, the subscription will apply to all linked Facebook and Instagram accounts in a user's Accounts Center. As is the case for many online subscriptions, the iOS and Android pricing take into account the fees that Apple and Google charge through respective purchasing policies."
2024-07-17
-
According to Axios, Meta will [withhold future multimodal AI models from customers in the European Union](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) "due to the unpredictable nature of the European regulatory environment." From the report: _Meta plans to incorporate the new multimodal models, which are able to reason across video, audio, images and text, in a wide range of products, including smartphones and its Meta Ray-Ban smart glasses. Meta says its decision also means that European companies will not be able to use the multimodal models even though they are being released under an open license. It could also prevent companies outside of the EU from offering products and services in Europe that make use of the new multimodal models. The company is also planning to release a larger, text-only version of its Llama 3 model soon. That will be made available for customers and companies in the EU, Meta said. Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR -- the EU's existing data protection law. Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June. Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed. In June -- after announcing its plans publicly -- Meta was ordered to pause the training on EU data. A couple of weeks later it received dozens of questions from data privacy regulators from across the region. The United Kingdom has a nearly identical law to GDPR, but Meta says it isn't seeing the same level of regulatory uncertainty and plans to launch its new model for U.K. users. A Meta representative told Axios that European regulators are taking much longer to interpret existing law than their counterparts in other regions, and that training on European data is key to ensuring its products properly reflect the terminology and culture of the region._
2024-07-18
-
Meta says it [won't be launching its upcoming multimodal AI model](https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations) -- capable of handling video, audio, images, and text -- in the European Union, citing regulatory concerns. From a report: _The decision will prevent European companies from using the multimodal model, despite it being released under an open license. Just last week, the EU finalized compliance deadlines for AI companies under its strict new AI Act. Tech companies operating in the EU will generally have until August 2026 to comply with rules around copyright, transparency, and AI uses like predictive policing. Meta's decision follows a similar move by Apple, which recently said it would likely exclude the EU from its Apple Intelligence rollout due to concerns surrounding the Digital Markets Act._
2024-07-19
-
Meta’s cost-cutting efforts at its metaverse division, Reality Labs, could help save the company $3 billion, Bank of America analysts said Friday. [Meta is reportedly cutting the budget for its Reality Labs](https://www.theinformation.com/articles/reality-comes-to-metas-reality-labs?rc=5xvgzc) hardware division, which makes its VR headsets, by about 20% between this year and 2026, The Information reported Thursday, citing unnamed sources. That doesn’t mean the company is halting its virtual and augmented reality innovations: it is planning to release new Quest headsets and AR glasses in the next three years, the outlet said. The cost-cutting at Reality Labs is instead meant to keep the division’s seemingly out-of-control spending in check. While Bank of America’s Justin Post and Nitin Bansal said in a research note Friday that Meta could save an estimated $3 billion, they added that some of those cost savings could be reallocated to Meta’s AI efforts. But those efforts are also being put on hold in some regions (i.e. the European Union and Brazil) as [Meta looks to avoid growing regulatory scrutiny in the AI space](https://qz.com/meta-pause-generative-ai-brazil-multimodal-model-eu-1851599618). Meta’s plans for AI and virtual reality will likely come into clearer focus when the Facebook and Instagram parent reports its second quarter earnings on July 31. Meta CEO Mark Zuckerberg has repeatedly stated his belief that the capital-m Metaverse is the future. “We continue making steady progress building the metaverse as well,” he said in a call with investors in April, discussing [the company’s first quarter financial results](https://qz.com/meta-metaverse-facebook-earnings-mark-zuckerberg-1851433524). In the same breath, Meta reported a loss of $3.8 billion for its Reality Labs division. The company’s VR and AR efforts are surely still a money-loser for Meta, but Reality Labs is at least finding ways to shrink its losses — which fell 17% between the last three months of 2023 and the first quarter of 2024. Bank of America analysts maintained their buy rating of Meta’s stock on Friday. They see shares rising nearly 15% to $550 over the next year. By the numbers -------------- **$55 billion:** How much Meta’s Reality Labs has lost the company since 2019 **30%:** How much first quarter sales for Reality Labs, which totaled $440 million, rose from last year **$3 billion:** How much Meta could save with new cost-cutting measures at Reality Labs **14.8%:** How much Bank of America analysts see Meta’s share price rising over the next year — from $479 to $550
-
Meta quickly [shifted away from the metaverse](https://qz.com/meta-layoffs-2023-jobs-metaverse-ai-1850196575) to generative artificial intelligence, and now it’s pumping the brakes on some of its efforts amid regulatory scrutiny. On Wednesday, Meta said it was [pausing the use of its generative AI tools in Brazil](https://www.reuters.com/technology/artificial-intelligence/meta-decides-suspend-its-generative-ai-tools-brazil-2024-07-17/) due to opposition from the country’s government over the company’s privacy policy on personal data and AI, according to Reuters. Meta was [banned from training its AI models](https://www.gov.br/anpd/pt-br/assuntos/noticias/anpd-determina-suspensao-cautelar-do-tratamento-de-dados-pessoais-para-treinamento-da-ia-da-meta) on Brazilians’ personal data by the country’s National Data Protection Authority (ANPD) earlier this month. The Facebook owner had updated its privacy policy in May to give itself [permission to train AI on public Facebook, Messenger, and Instagram data](https://about.fb.com/br/news/2024/05/como-a-meta-esta-desenvolvendo-a-inteligencia-artificial-para-o-brasil/) in Brazil. The [ANPD said Meta’s privacy policy](https://apnews.com/article/brazil-tech-meta-privacy-data-93e00b2e0e26f7cc98795dd052aea8e1) poses “the imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of the affected data subjects,” according to the Associated Press. Meanwhile, Meta has decided to [not release its upcoming and future multimodal AI models](https://www.axios.com/2024/07/17/meta-future-multimodal-ai-models-eu) in the European Union “due to the unpredictable nature of the European regulatory environment,” the company said in a statement shared with Axios. The company’s decision follows Apple, which said in June it would [likely not roll out its new Apple Intelligence and other AI features](https://qz.com/apple-not-release-apple-intelligence-european-union-dma-1851553830) in the bloc due to the Digital Markets Act. Even though Meta’s multimodal models will be under an open license, companies in Europe will not be able to use them because of the decision, Axios reported. And companies outside of the bloc could reportedly be blocked from offering products and services on the continent that use Meta’s models. However, Meta has a larger, text-only version of its Llama 3 model that will be made available in the EU when it’s released, the company told Axios. In June, Meta said it would [delay training](https://about.fb.com/news/2024/06/building-ai-technology-for-europeans-in-a-transparent-and-responsible-way/) its [large language models](https://qz.com/ai-artificial-intelligence-glossary-vocabulary-terms-1851422473) on public data from Facebook and Instagram users in the European Union after facing pushback from the Irish Data Protection Commission (DPC). “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” Meta said.
2024-07-23
-
Meta is taking on its artificial intelligence rivals with the [latest version of its Llama model](https://llama.meta.com/), saying open-source AI is “good for the world.” The open-source Llama 3.1 models released Tuesday, which Meta calls its [“most capable” to date](https://ai.meta.com/blog/meta-llama-3-1/), include its largest model, Llama 3.1 405B, which stands for 405 billion parameters, or the variables a model learns from training data that guide its behavior. Llama 3.1 405B rivals its closed-source competitors from OpenAI and Google in “state-of-the-art capabilities,” including general knowledge, math, and translating languages, Meta said. The release also includes [upgraded versions of its 8B and 70B models](https://qz.com/meta-ai-assistant-instagram-facebook-messenger-llama3-1851421803) which were introduced in April. Llama 3.1 405B was [evaluated on over 150 benchmark datasets and by humans](https://ai.meta.com/blog/meta-llama-3-1/) against other leading foundation models, including OpenAI’s GPT-4 and GPT-4o, and Anthropic’s Claude 3.5 Sonnet, which are closed-source models. While Llama 3.1 405B was outperformed on some benchmarks, the “experimental evaluation suggests that our flagship model is competitive” with the other leading models, Meta said. The model was trained with over 16,000 of Nvidia’s H100 GPUs, or graphics processing units, according to Meta. The chipmaker also announced a new [Nvidia AI Foundry service](https://nvidianews.nvidia.com/news/nvidia-ai-foundry-custom-llama-generative-models) for enterprises and nation states to build “supermodels” with Llama 3.1 405B. The Llama models are used to power Meta’s AI chatbot, Meta AI, which is available on Facebook, Instagram, and other platforms. Meta expanded access to Meta AI [in Latin America and other countries](https://about.fb.com/news/2024/07/meta-ai-is-now-multilingual-more-creative-and-smarter/) on Tuesday, and announced it is offering the chatbot in seven new languages, including German and Hindi. Users have the option to use the Llama 3.1 405B-powered Meta AI on WhatsApp and meta.ai. Meta chief executive Mark Zuckerberg said the company expects “future Llama models to become the most advanced in the industry,” and that Llama 3.1 405B is a step toward making open-source models the industry standard. “AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research,” Zuckerberg said in [a statement](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/) released Tuesday. “Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn’t concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society.” Zuckerberg, who told Bloomberg the company is [working on Llama 4](https://www.bloomberg.com/news/articles/2024-07-23/meta-s-zuckerberg-aims-to-rival-openai-google-with-new-llama-ai-model?srnd=phx-technology&sref=P6Q0mxvj), said he believes “open-source AI will be safer than the alternatives,” and that the company’s new model “will be an inflection point in the industry where most developers begin to primarily use open source.”
-
Jul 23, 2024 11:05 AM The newest version of Llama will make AI more accessible and customizable, but it will also stir up debate over the dangers of releasing AI without guardrails. ![Photo of Meta CEO Mark Zuckerberg delivering a speech.](https://media.wired.com/photos/669ec6470d8bbfc56a6384e8/master/w_2560%2Cc_limit/Meta%2520Launches%2520Llama%25203_h_27.RTSO8SJ0.jpg) Photograph: Carlos Barria/Reuters/Redux Most tech moguls hope to sell [artificial intelligence](https://www.wired.com/tag/artificial-intelligence/) to the masses. But Mark Zuckerberg is giving away what Meta considers to be one of the world’s best AI models for free. Meta released the biggest, most capable version of a large language model called [Llama](https://www.wired.com/story/metas-open-source-llama-3-nipping-at-openais-heels/) on Tuesday, free of charge. Meta has not disclosed the cost of developing Llama 3.1, but Zuckerberg [recently told investors](https://investor.fb.com/investor-news/press-release-details/2024/Meta-Reports-First-Quarter-2024-Results/default.aspx) that his company is spending billions on AI development. Through this latest release, Meta is showing that the closed approach favored by most AI companies is not the only way to develop AI. But the company is also putting itself at the center of debate over the dangers posed by releasing AI without controls. Meta trains Llama in a way that prevents the model from producing harmful output by default, but the model can be modified to remove such safeguards. Meta says that Llama 3.1 is as clever and useful as the best commercial offerings from companies like [OpenAI](https://www.wired.com/tag/openai/), [Google](https://www.wired.com/tag/google/), and [Anthropic](https://www.wired.com/story/anthropic-black-box-ai-research-neurons-features/). In certain benchmarks that measure progress in AI, Meta says the model is the smartest AI on Earth. “It’s very exciting,” says [Percy Liang](https://cs.stanford.edu/~pliang/), an associate professor at Stanford University who tracks open source AI. If developers find the new model to be just as capable as the industry’s leading ones, including [OpenAI’s GPT-4o](https://www.wired.com/story/openai-gpt-4o-model-gives-chatgpt-a-snappy-flirty-upgrade/), Liang says, many could move over to Meta’s offering. “It will be interesting to see how the usage shifts,” he says. In an [open letter](https://about.fb.com/news/2024/07/open-source-ai-is-the-path-forward/) posted with the release of the new model, Meta CEO Zuckerberg compared Llama to the open source [Linux](https://www.wired.com/tag/linux/) operating system. When Linux took off in the late '90s and early 2000s, many big tech companies were invested in closed alternatives and criticized open source software as risky and unreliable. Today, however, Linux is widely used in cloud computing and serves as the core of the Android mobile OS. “I believe that AI will develop in a similar way,” Zuckerberg writes in his letter. “Today, several tech companies are developing leading closed models. But open source is quickly closing the gap.” However, Meta’s decision to give away its AI is not devoid of self-interest. Previous [Llama releases](https://www.wired.com/story/metas-open-source-llama-upsets-the-ai-horse-race/) have helped the company secure an influential position among AI researchers, developers, and startups.
Liang also notes that Llama 3.1 is not truly open source, because Meta imposes restrictions on its usage—for example, limiting the scale at which the model can be used in commercial products. The new version of Llama has 405 billion parameters, or tweakable elements. Meta has already released two smaller versions of Llama 3, one with 70 billion parameters and another with 8 billion. Meta today also released upgraded versions of these models branded as Llama 3.1. Llama 3.1 is too big to be run on a regular computer, but Meta says that many cloud providers, including Databricks, Groq, AWS, and Google Cloud, will offer hosting options to allow developers to run custom versions of the model. The model can also be accessed at [Meta.ai](https://www.meta.ai/). Some developers say the new Llama release could have broad implications for AI development. [Stella Biderman](https://www.stellabiderman.com/), executive director of [EleutherAI](https://www.eleuther.ai/), an open source AI project, also notes that Llama 3 is not fully open source. But Biderman says that a change to Meta’s latest license will let developers train their own models using Llama 3, something that most AI companies currently prohibit. “This is a really, really big deal,” Biderman says. Unlike OpenAI and Google’s latest models, Llama is not “multimodal,” meaning it is not built to handle images, audio, and video. But Meta says the model is significantly better at using other software such as a web browser, something that many researchers and companies [believe could make AI more useful](https://www.wired.com/story/fast-forward-forget-chatbots-ai-agents-are-the-future/). After OpenAI released ChatGPT in late 2022, [some AI experts called for a moratorium](https://www.wired.com/story/chatgpt-pause-ai-experiments-open-letter/) on AI development for fear that the technology could be misused or too powerful to control. Existential alarm has since cooled, but many experts remain concerned that unrestricted AI models could be misused by hackers or used to speed up the development of biological or chemical weapons. “Cyber criminals everywhere will be delighted,” says Geoffrey Hinton, a Turing award winner whose pioneering work on a field of machine learning known as deep learning laid the groundwork for large language models. Hinton joined Google in 2013 but [left the company last year](https://www.wired.com/story/geoffrey-hinton-ai-chatgpt-dangers/) to speak out about the possible risks that might come with more advanced AI models. He says that AI is fundamentally different from open source software because models cannot be scrutinized in the same way. “People fine-tune models for their own purposes, and some of those purposes are very bad,” he adds. Meta has helped allay some fears by releasing previous versions of Llama carefully. The company says it puts Llama through rigorous safety testing before release, and adds that there is little evidence that its models make it easier to develop weapons. Meta said it will release several new tools to help developers keep Llama models safe by moderating their output and blocking attempts to break restrictions. Jon Carvill, a spokesman for Meta, says the company will decide on a case-by-case basis whether to release future models. Dan Hendrycks, a computer scientist and the director of the [Center for AI Safety](https://www.safe.ai/), a nonprofit organization focused on AI dangers, says Meta has generally done a good job of testing its models before releasing them.
He says that the new model could help experts understand future risks. “Today’s Llama 3 release will enable researchers outside big tech companies to conduct much-needed AI safety research.”
-
Meta has [released Llama 3.1](https://llama.meta.com/), its largest open-source AI model to date, in a move that challenges the closed approaches of competitors like OpenAI and Google. The new model, [boasting 405 billion parameters](https://ai.meta.com/blog/meta-llama-3-1/), is claimed by Meta to outperform GPT-4o and Claude 3.5 Sonnet on several benchmarks, with CEO Mark Zuckerberg predicting that Meta AI will become the most widely used assistant by year-end. Llama 3.1, which Meta says was trained using over 16,000 Nvidia H100 GPUs, is being made available to developers through partnerships with major tech companies including Microsoft, Amazon, and Google, potentially reducing deployment costs compared to proprietary alternatives. The release includes smaller versions with 70 billion and 8 billion parameters, and Meta is introducing new safety tools to help developers moderate the model's output. While Meta isn't disclosing exactly what data it used to train its models, the company confirmed it used synthetic data to enhance the model's capabilities. The company is also expanding its Meta AI assistant, powered by Llama 3.1, to support additional languages and integrate with its various platforms, including WhatsApp, Instagram, and Facebook, as well as its Quest virtual reality headset.
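To make concrete why a 405-billion-parameter model is served through cloud partnerships rather than on ordinary hardware, here is a rough back-of-the-envelope sketch of the memory needed just to store the weights at common precisions. The figures are illustrative estimates only (weights only, assuming an 80 GB H100) and ignore activations, the KV cache, and serving overhead.

```python
# Rough arithmetic for why a 405B-parameter model needs cluster-scale hardware.
# Estimates cover weight storage only; real deployments also need memory for
# activations, the KV cache, and parallelism overhead.
PARAMS = 405e9
BYTES_PER_PARAM = {"fp16/bf16": 2, "int8": 1, "int4": 0.5}
H100_MEMORY_GB = 80  # assuming the 80 GB variant of the H100

for precision, nbytes in BYTES_PER_PARAM.items():
    weight_gb = PARAMS * nbytes / 1e9
    min_gpus = weight_gb / H100_MEMORY_GB
    print(f"{precision:>9}: ~{weight_gb:,.0f} GB of weights -> at least ~{min_gpus:.0f} H100-class GPUs")
# fp16/bf16: ~810 GB -> ~10 GPUs; int8: ~405 GB -> ~5; int4: ~203 GB -> ~3
```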
2024-08-23
-
Meta Platforms has [canceled plans for a premium mixed-reality headset](https://www.theinformation.com/articles/meta-cancels-high-end-mixed-reality-headset) intended to compete with Apple's Vision Pro, _The Information_ reported Friday, citing sources. From the report: _Meta told employees at the company's Reality Labs division to stop work on the device this week after a product review meeting attended by Meta CEO Mark Zuckerberg, Chief Technology Officer Andrew Bosworth and other Meta executives, the employees said. The axed device, which was internally code-named La Jolla, began development in November and was scheduled for release in 2027, according to current and former Meta employees. It was going to contain ultrahigh-resolution screens known as micro OLEDs -- the same display technology used in Apple's Vision Pro._
2024-09-17
-
[Meta](https://www.fastcompany.com/91190917/restarts-plans-to-train-ai-with-uk-user-data-facebook-instagram-content-social-media-activity) said it’s banning Russian state media organizations from its social media platforms, alleging that the outlets used deceptive tactics to amplify Moscow’s propaganda. The announcement drew a rebuke from the Kremlin on Tuesday. The company, which owns Facebook, WhatsApp, and Instagram, said late Monday that it will roll out the ban over the next few days in an escalation of its [efforts to counter Russia’s covert influence operations](https://www.fastcompany.com/90725896/meta-will-expand-lock-your-profile-protections-to-russian-facebook-users). “After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT, and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a prepared statement. Kremlin spokesman Dmitry Peskov lashed out, saying that “such selective actions against Russian media are unacceptable,” and that “Meta with these actions are discrediting themselves.” “We have an extremely negative attitude towards this. And this, of course, complicates the prospects for normalizing our relations with Meta,” Peskov told reporters during his daily conference call. RT, formerly known as Russia Today, and Rossiya Segodnya also denounced the move. “It’s cute how there’s a competition in the West—who can try to spank RT the hardest, in order to make themselves look better,” RT said in a release.
2024-09-25
-
175131917 story [![Facebook](//a.fsdn.com/sd/topics/facebook_64.png)](//tech.slashdot.org/index2.pl?fhfilter=facebook)[![Technology](//a.fsdn.com/sd/topics/technology_64.png)](//tech.slashdot.org/index2.pl?fhfilter=technology) Posted by msmash on Wednesday September 25, 2024 @02:02PM from the pushing-the-limits dept. Meta [unveiled prototype AR glasses codenamed Orion](https://www.theverge.com/24253908/meta-orion-ar-glasses-demo-mark-zuckerberg-interview) on Wednesday, featuring a 70-degree field of view, Micro LED projectors, and silicon carbide lenses that beam graphics directly into the wearer's eyes. In an interview with The Verge, CEO Mark Zuckerberg demonstrated the device's capabilities, including ingredient recognition, holographic gaming, and video calling, controlled by a neural wristband that interprets hand gestures through electromyography. Despite technological advances, Meta has shelved Orion's commercial release, citing manufacturing complexities and costs reaching $10,000 per unit, primarily due to difficulties in producing the silicon carbide lenses. The company now aims to launch a refined, more affordable version in coming years, with executives hinting at a price comparable to high-end smartphones and laptops. Zuckerberg views AR glasses as critical to Meta's future, potentially freeing the company from its reliance on smartphone platforms controlled by Apple and Google. The push into AR hardware comes as tech giants and startups intensify competition in the space, with Apple launching Vision Pro and Google partnering with Magic Leap and Samsung on headset development.
2024-10-04
-
The next frontier in generative AI is video, and with Movie Gen, Meta has now staked its claim. Meta just announced its own media-focused [AI model](https://www.wired.com/tag/artificial-intelligence), called Movie Gen, that can be used to generate realistic video and audio clips. The company shared multiple 10-second clips generated with [Movie Gen](https://ai.meta.com/blog/movie-gen-media-foundation-models-generative-ai-video/), including a Moo Deng-esque baby hippo swimming around (generated from the prompt "A baby hippo swimming in the river. Colorful flowers float at the surface, as fish swim around the hippo. The hippo's skin is smooth and shiny, reflecting the sunlight that filters through the water."), to demonstrate its capabilities. While the tool is not yet available for use, this Movie Gen announcement comes shortly after its Meta Connect event, which showcased new and [refreshed hardware](https://www.wired.com/story/meta-quest-3s-headset/) and the latest version of its [large language model, Llama 3.2](https://www.wired.com/story/meta-releases-new-llama-model-ai-voice/). Going beyond the generation of straightforward [text-to-video](https://www.wired.com/story/text-to-video-ai-generators-filmmaking-hollywood/) clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was transformed to look like she was wearing steampunk binoculars; other sample videos came from prompts such as "make me a painter" and "a woman DJ spins records. She is wearing a pink jacket and giant headphones. There is a cheetah next to the woman." Audio bites can be generated alongside the videos with Movie Gen. In the sample clips, an AI-generated man stands near a waterfall with audible splashes and the hopeful sounds of a symphony; the engine of a sports car purrs and tires screech as it zips around the track; and a snake slides along the jungle floor, accompanied by suspenseful horns. Meta shared some further details about Movie Gen in a research paper released Friday. Movie Gen Video consists of 30 billion parameters, while Movie Gen Audio consists of 13 billion parameters. (A model's parameter count roughly corresponds to how capable it is; by contrast, the largest variant of [Llama 3.1 has 405 billion parameters](https://www.wired.com/story/meta-ai-llama-3/).) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims that it outperforms competitive models in overall video quality. Earlier this year, CEO Mark Zuckerberg demonstrated Meta AI’s Imagine Me feature, where users can upload a photo of themselves and role-play their face into multiple scenarios, by posting an AI image of himself [drowning in gold chains](https://www.threads.net/@zuck/post/C9xxwZZyx5B?xmt=AQGzXnHzmnMqrWb6E16MB7-sBjd7WYocg9yooqdOatxWQg) on Threads. A video version of a similar feature is possible with the Movie Gen model; think of it as a kind of [ElfYourself](https://www.wired.com/2007/12/geekdad-mashup/) on steroids. What information has Movie Gen been trained on?
The specifics aren’t clear in Meta’s announcement post: “We’ve trained these models on a combination of licensed and publicly available data sets.” The [sources of training data](https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/) and [what’s fair to scrape from the web](https://www.wired.com/story/perplexity-is-a-bullshit-machine/) remain contentious issues for generative AI tools, and it's rarely public knowledge what text, video, or audio clips were used to create any of the major models. It will be interesting to see how long it takes Meta to make Movie Gen broadly available. The announcement blog vaguely gestures at a “potential future release.” For comparison, OpenAI announced its [AI video model, called Sora](https://www.wired.com/story/openai-sora-generative-ai-video/), earlier this year and has not yet made it available to the public or shared any upcoming release date (though WIRED did receive a few exclusive Sora clips from the company for an [investigation into bias](https://www.wired.com/story/artificial-intelligence-lgbtq-representation-openai-sora/)). Considering Meta’s legacy as a social media company, it’s possible that tools powered by Movie Gen will start popping up, eventually, inside of Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model [available to creators](https://www.wired.com/story/generative-ai-tools-youtube-shorts-veo/) inside its YouTube Shorts sometime next year. While larger tech companies are still holding off on fully releasing video models to the public, you can experiment with AI video tools right now from smaller, up-and-coming startups like [Runway](https://runwayml.com/) and [Pika](https://pika.art/home). Give Pikaffects a whirl if you’ve ever been curious what it would be like to see yourself [cartoonishly crushed](https://www.threads.net/@crumbler/post/DAokPKetoMh?xmt=AQGzNNS-5u820OA0WpsHTIxnoDiVH50L_OwMbOEw2V9DLA) with a hydraulic press or suddenly melt in a puddle.
-
175192559 story [![AI](//a.fsdn.com/sd/topics/ai_64.png)](//meta.slashdot.org/index2.pl?fhfilter=ai)[![Movies](//a.fsdn.com/sd/topics/movies_64.png) ](//meta.slashdot.org/index2.pl?fhfilter=movies)[![Slashdot.org](//a.fsdn.com/sd/topics/meta_64.png)](//meta.slashdot.org/index2.pl?fhfilter=meta) Posted by [BeauHD](https://www.linkedin.com/in/beauhd/) on Friday October 04, 2024 @05:20PM from the RIP-movie-studios dept. An anonymous reader quotes a report from Ars Technica: _On Friday, Meta announced a preview of [Movie Gen](https://ai.meta.com/research/movie-gen/), a new suite of AI models designed to create and manipulate video, audio, and images, including [creating a realistic video from a single photo of a person](https://arstechnica.com/ai/2024/10/metas-new-movie-gen-ai-system-can-deepfake-video-from-a-single-photo/). The company claims the models outperform other video-synthesis models when evaluated by humans, pushing us closer to a future where anyone can synthesize a full video of any subject on demand. The company has not yet said when or how it will release these capabilities to the public, but Meta says Movie Gen is a tool that may allow people to "enhance their inherent creativity" rather than replace human artists and animators. The company envisions future applications such as easily creating and editing "day in the life" videos for social media platforms or generating personalized animated birthday greetings. Movie Gen builds on Meta's previous work in video synthesis, following 2022's Make-A-Scene video generator and the Emu image-synthesis model. Using text prompts for guidance, this latest system can generate custom videos with sounds for the first time, edit and insert changes into existing videos, and transform images of people into realistic personalized videos. \[...\] Movie Gen's video-generation model can create 1080p high-definition videos up to 16 seconds long at 16 frames per second from text descriptions or an image input. Meta claims the model can handle complex concepts like object motion, subject-object interactions, and camera movements. _You can view example videos [here](https://ai.meta.com/research/movie-gen/). Meta also released a [research paper](https://ai.meta.com/static-resource/movie-gen-research-paper) with more technical information about the model. As for the training data, the company says it trained these models on a combination of "licensed and publicly available datasets." Ars notes that this "very likely includes videos uploaded by Facebook and Instagram users over the years, although this is speculation based on Meta's current policies and previous behavior."
2024-10-21
-
Facebook owner [Meta](https://www.fastcompany.com/91211773/meta-platforms-2024-layoffs-reality-labs-instagram-whatsapp-year-of-efficiency) said on Friday it was releasing a batch of new [AI](https://www.fastcompany.com/91206477/meta-ai-chatbot-brazil-uk-chatgpt) models from its research division, including a “Self-Taught Evaluator” that may offer a path toward less human involvement in the AI development process. The release follows Meta’s introduction of the tool in an August paper, which detailed how it relies upon the same “chain of thought” technique used by OpenAI’s recently released o1 models, enabling it to make reliable judgments about models’ responses. That technique involves breaking down complex problems into smaller logical steps and appears to improve the accuracy of responses on challenging problems in subjects like science, coding, and math. Meta’s researchers used entirely AI-generated data to train the evaluator model, eliminating human input at that stage as well. The ability to use AI to evaluate AI reliably offers a glimpse at a possible pathway toward building autonomous AI agents that can learn from their own mistakes, two of the Meta researchers behind the project told Reuters. Many in the AI field envision such agents as digital assistants intelligent enough to carry out a vast array of tasks without human intervention. Self-improving models could cut out the need for an often expensive and inefficient process used today called Reinforcement Learning from Human Feedback, which requires input from human annotators who must have specialized expertise to label data accurately and verify that answers to complex math and writing queries are correct.
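The article doesn't publish code, but the judging step it describes follows the familiar "LLM as a judge with chain of thought" pattern. The sketch below is a minimal illustration under stated assumptions: the `generate` argument is a hypothetical stand-in for any chat-completion call, the prompt template is invented for illustration, and the full Self-Taught Evaluator pipeline (iteratively generating reasoning traces and training on the synthetic judgments) is not shown.

```python
# Illustrative sketch of chain-of-thought judging in the spirit of a
# self-taught evaluator: the evaluator model reasons step by step, then
# emits a final verdict that can be parsed without human annotation.
JUDGE_TEMPLATE = """You are grading two answers to the same question.

Question:
{question}

Answer A:
{answer_a}

Answer B:
{answer_b}

Think through the problem step by step, checking each answer for
correctness and completeness. Then, on the final line, output exactly
"VERDICT: A" or "VERDICT: B"."""


def judge(question: str, answer_a: str, answer_b: str, generate) -> str:
    """Return 'A' or 'B' according to the evaluator model's final verdict.

    `generate` is a hypothetical callable: prompt string in, model text out.
    """
    reasoning = generate(JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
    # Parse the last VERDICT line; fall back to 'A' if the format slips.
    for line in reversed(reasoning.strip().splitlines()):
        if line.startswith("VERDICT:"):
            return line.split(":", 1)[1].strip()[:1].upper()
    return "A"

# Synthetic preference data (question, winner, loser) could then be built
# from these verdicts alone, replacing human labels at that stage.
```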
2024-11-01
-
ChatGPT takes on Google, Meta's spending spree, and Microsoft's data center problem: AI news roundup. Michael Hunter, an Atlanta-based real estate marketing professional and Apple ([AAPL](https://qz.com/quote/AAPL)) power user, has watched Apple’s new Apple Intelligence features evolve from promising to problematic. After a month with iOS 18.1's early release through his developer account, Hunter was impressed by the system’s enhanced Siri capabilities and responsiveness. [Read More](https://qz.com/apple-intelligence-ai-iphone-beta-features-siri-users-1851684957)
2024-11-14
-
European Union regulators hit Facebook parent Meta with a fine of nearly 800 million euros on Thursday for what they call “abusive practices” involving its Marketplace online classified ads business. LONDON -- European Union regulators issued their first antitrust fine to Facebook parent Meta on Thursday with a penalty of nearly 800 million euros for what they call “abusive practices” involving its Marketplace online classified ads business. The European Commission, the 27-nation bloc's executive branch and top antitrust enforcer, issued the 797.72 million euro ($841 million) penalty after its [long-running investigation](https://apnews.com/article/europe-technology-business-a44f2f093471ffa7ea8ff8e23f8282da) found that the company abused its dominant position and engaged in anti-competitive behavior. It’s the first time the EU has imposed a fine on the social media giant for breaches of the bloc’s competition law. Brussels has already slapped Big Tech rivals [Google](https://apnews.com/article/google-european-union-antitrust-shopping-court-a281e4e4722efa816e929a52a9939d86) and [Apple](https://apnews.com/article/apple-antitrust-fine-music-streaming-europe-439e3e8af91d844dee3dc8ff8012c68f) with billions in antitrust penalties. The commission had [accused Meta](https://apnews.com/article/technology-europe-business-european-union-f688beadd49ab55e326e163960675f19) of distorting competition by tying its online classified ad business to its social network, automatically exposing Facebook users to Marketplace “whether they want it or not” and shutting out competitors. It was also concerned that Meta was imposing unfair trading conditions with terms of service that authorized the company to use ad-related data — generated from competing classified ad platforms that advertise on Facebook or Instagram — to benefit Marketplace. Meta's practices gave it “advantages that other online classified ads service providers could not match,” Margrethe Vestager, the commission's executive vice-president in charge of competition policy, said in a press release. “This is illegal under EU antitrust rules. Meta must now stop this behaviour.” Meta said in a statement that the decision fails to prove any “competitive harm” to rivals or consumers and “ignores the realities of the thriving European market for online classified listing services.” The company said the Commission's case ignores the fact that Facebook users can choose to “engage with Marketplace, and many don't.” It said online marketplaces, including global sites like eBay, Europe-wide platforms like Vinted, and national services, are continuing to grow. Meta said it would comply with the Commission's order to end the offending conduct and not repeat it, but also vowed to appeal. The case dates back to 2021, when European Union regulators and their counterparts in Britain opened dual investigations into the classified business. The British regulator [wrapped up its investigation](https://apnews.com/article/amazon-meta-britain-antitrust-a63bf08544e67bd3e5eda5c4077f37c8) last year after Meta made concessions. The company continues to face EU scrutiny on other fronts, including investigations into whether Facebook and Instagram [child safety](https://apnews.com/article/facebook-instagram-meta-european-union-digital-services-act-61653e20757e75671092fb746e41ed4b) and [election integrity](https://apnews.com/article/meta-facebook-instagram-1fea720aeb5def876a6d415ed6136463) measures comply with the bloc’s digital rulebook.
Meta has previously been hit with a series of [fines](https://apnews.com/article/meta-facebook-european-union-privacy-e40ab7bfa674b91bffb2813dce9b04d1) for breaches of the EU’s stringent privacy laws, including a [record 1.2 billion euro penalty](https://apnews.com/article/meta-facebook-data-privacy-fine-europe-9aa912200226c3d53aa293dca8968f84) last year.
2025-01-15
-
175920375 story [![AI](//a.fsdn.com/sd/topics/ai_64.png)](//tech.slashdot.org/index2.pl?fhfilter=ai)[![Facebook](//a.fsdn.com/sd/topics/facebook_64.png)](//tech.slashdot.org/index2.pl?fhfilter=facebook) Posted by msmash on Wednesday January 15, 2025 @01:01PM from the how-about-that dept. Executives and researchers leading Meta's AI efforts [obsessed over beating OpenAI's GPT-4 model](https://techcrunch.com/2025/01/14/meta-execs-obsessed-over-beating-openais-gpt-4-internally-court-filings-reveal/) while developing Llama 3, according to internal messages unsealed by a court in one of the company's ongoing AI copyright cases, Kadrey v. Meta. From a report: _"Honestly... Our goal needs to be GPT-4," said Meta's VP of Generative AI, Ahmad Al-Dahle, in an October 2023 message to Meta researcher Hugo Touvron. "We have 64k GPUs coming! We need to learn how to build frontier and win this race." Though Meta releases open AI models, the company's AI leaders were far more focused on beating competitors that don't typically release their models' weights, like Anthropic and OpenAI, and instead gate them behind an API. Meta's execs and researchers held up Anthropic's Claude and OpenAI's GPT-4 as a gold standard to work toward. The French AI startup Mistral, one of the biggest open competitors to Meta, was mentioned several times in the internal messages, but the tone was dismissive. "Mistral is peanuts for us," Al-Dahle said in a message. "We should be able to do better," he said later._