AI Pulse News Roundup (March 2025 Edition)

Welcome to the March 2025 AI Pulse News Roundup Thread!

This is your space to:

:bulb: Post breaking news about AI research, applications, policies, product launches, ethical debates, and more.
:speech_balloon: Join the conversation by asking questions, sharing insights, and debating the implications with fellow members.
:books: Review the highlights as this thread becomes a snapshot of March’s key moments in AI.

Whether it’s a groundbreaking paper, a policy shift, or an exciting new tool, everything AI is welcome. Let’s keep the momentum going and make this month another great one for AI discussions.

Have a story or topic to share? Drop it below and let’s get started! :rocket:

Here’s to an exciting March in AI! :point_down:

7 Likes
  • Pika Labs 2.2 Update: Pika Labs has launched version 2.2, offering enhanced quality, 10-second 1080p video generations, and new capabilities for transitions and transformations. Source

  • Meta’s AI Expansion: Meta is reportedly developing a standalone Meta AI app, set for a Q2 release, with potential paid subscription options similar to OpenAI’s approach. Source

  • Figure’s Humanoid Robot Push: Figure is accelerating its timeline to introduce humanoid robots into homes, launching Alpha testing this year with advancements from its new Helix AI. Source

  • Microsoft’s Copilot Enhancements: Microsoft has updated Copilot with a dedicated macOS app, support for PDF and text file uploads, and an improved user interface. Source

  • Meta’s Aria Gen 2 Glasses: Meta has introduced Aria Gen 2 smart glasses, featuring advanced sensors, on-device AI processing, and an all-day battery to support research in machine perception, contextual AI, and robotics. Source

  • You Labs Unveils ARI: You Labs introduced ARI, an AI research agent capable of analyzing up to 400 sources and generating detailed reports with charts, citations, and visuals in under five minutes. Source

6 Likes

Cool. Now, when do I get telekinesis?

DeepSeek’s AI Models Claim 545% Profit Margin

Chinese AI startup DeepSeek has disclosed that its AI models, V3 and R1, could theoretically achieve a 545% profit margin on inference costs if all users were on paid plans. Currently, many users access these models for free, leading to daily operational costs of approximately $87,072 for Nvidia chips. If billed at R1’s pricing, daily revenue could reach $562,027, projecting annual revenues exceeding $200 million. However, these figures rest on the assumption of universal paid usage, a scenario no provider has yet realized. businessinsider.com
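As a quick sanity check on those figures, here is a back-of-the-envelope sketch in Python; the numbers are simply the ones reported above, not independently verified:

```python
# Back-of-the-envelope check of DeepSeek's reported inference economics.
daily_cost = 87_072      # reported daily cost of Nvidia chips, USD
daily_revenue = 562_027  # theoretical daily revenue at R1 pricing, USD

margin = (daily_revenue - daily_cost) / daily_cost
print(f"Theoretical profit margin: {margin:.0%}")        # -> 545%

annual_revenue = daily_revenue * 365
print(f"Projected annual revenue: ${annual_revenue:,}")  # -> $205,139,855, i.e. "exceeding $200 million"
```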

Anthropic CEO Predicts AI Surpassing Human Coders by 2026

Dario Amodei, CEO of AI startup Anthropic, has predicted that superintelligent AI, capable of outperforming Nobel prize-winners in most fields, could emerge as soon as next year. Anthropic aims to create AI that will transform society by automating human tasks, akin to the impact of the industrial revolution. The company’s chatbot, Claude, contributes to its rapid growth and substantial funding from major firms such as Amazon and Google. Despite emphasizing AI safety, Anthropic’s pursuit of AI advancements poses potential risks and could bring societal disruptions. Amodei envisions a future where AI addresses complex biological problems, potentially leading to medical breakthroughs and extended human lifespans. thetimes.co.uk

SoftBank Seeks $16 Billion Loan for AI Investments Amid Criticism

SoftBank Group is in talks to borrow $16 billion to invest in artificial intelligence, with an additional $8 billion loan possible in early 2026. In January, SoftBank expressed intentions to invest up to $25 billion in OpenAI, the company behind ChatGPT, expanding its AI sector presence. This potential investment would complement the $15 billion already committed to Stargate, a joint venture involving Oracle, OpenAI, and SoftBank, aiming to maintain U.S. leadership in AI against global competitors. Elon Musk has commented that SoftBank CEO Masayoshi Son is “already over-leveraged.” reuters.com

Anthropic Participates in Department of Energy’s ‘1,000 Scientist AI Jam’

Anthropic has partnered with U.S. National Labs for the inaugural “1,000 Scientist AI Jam,” an event bringing together over 1,000 scientists to utilize AI in accelerating scientific discovery. Researchers will employ advanced AI models, including Anthropic’s Claude, to tackle challenges in their respective scientific domains, evaluate model responses, and provide feedback to enhance future AI systems. This collaboration aims to improve Claude’s ability to serve the scientific community and bolster the nation’s competitive edge in AI. openai.com / anthropic.com

Samsung Launches $300 Galaxy A Series Phones with AI Features

Samsung has introduced its new Galaxy A series phones, starting at $299.99 for the Galaxy A26. The lineup includes 6.7-inch 5G handsets featuring AI enhancements in photo editing and a “Circle to Search” function. These models incorporate elements of Samsung’s flagship software, with the $499.99 Galaxy A56 also offering improvements for night photography and a “Best Face” feature to optimize group photos. businesstimes.com / deccanchronicle.com

Honor Announces $10 Billion AI Investment Plan

Chinese smartphone maker Honor plans to invest $10 billion over the next five years to develop AI for its devices. The company aims to expand beyond smartphones into AI-powered PCs, tablets, and wearables. The move is part of Honor’s Alpha Plan, a strategic initiative aimed at transforming the company into a leading AI-device maker. reuters.com / ainvest.com

4 Likes

Interesting statement from DeepSeek. I don’t know the exact number of free vs. paid users on OpenAI, but here, for example, it is stated that there are 10 million paid subscribers (out of 400 million monthly active users). So if we assume all users were at least on a Plus subscription, that would be 400M * 20 USD * 12 = 96 billion USD ARR. Just saying.
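For what it’s worth, the arithmetic in that hypothetical checks out (a one-line sketch; the 400M monthly-user and $20 Plus-price figures come from the post above, and the everyone-pays assumption is of course the unrealistic part):

```python
# Hypothetical ARR if all 400M monthly active users paid the $20/month Plus price.
monthly_users = 400_000_000
plus_price_usd = 20

arr = monthly_users * plus_price_usd * 12
print(f"Hypothetical ARR: ${arr:,}")  # -> $96,000,000,000 (96 billion USD)
```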

2 Likes
  • Digg’s AI-Powered Comeback – Kevin Rose and Alexis Ohanian are reviving Digg with AI-enhanced moderation and user experience, aiming to compete in the social media space. The revamped platform seeks to blend Reddit-style community engagement with modern AI-driven content curation. (Twitter)

  • GPT-4.5 Preview Now for All Plus Users – OpenAI has expanded access to its GPT-4.5-Preview model to all Plus users, following its initial launch for Pro users and developers via API. This update brings improved performance and capabilities to a wider audience. (Twitter)

  • Judge Denies Musk’s OpenAI Block, Allows Suit to Continue – A federal judge rejected Elon Musk’s attempt to halt OpenAI’s nonprofit-to-for-profit transition but allowed parts of his lawsuit to proceed. The ruling keeps Musk’s legal challenge alive while permitting OpenAI’s structural shift. (Reuters)

  • Turing Award Winners Warn of AI Risks – Andrew Barto and Richard Sutton, pioneers of reinforcement learning, won the 2024 Turing Award for their contributions to AI. While celebrating their achievements, they cautioned against the rapid and unregulated deployment of artificial intelligence. (ACM)

  • Scale AI Wins Pentagon Contract for AI War Planning – Scale AI secured a major U.S. Department of Defense contract for “Thunderforge,” an AI-powered military planning initiative. The program will integrate AI agents into defense operations, raising concerns about automation in warfare. (Scale)

  • Codeium’s Windsurf Wave 4 Brings AI-Powered Development Features – Codeium’s latest update, Windsurf Wave 4, introduces AI-powered previews, tab-to-import functionality, and smart suggestions for faster app development. The enhancements aim to streamline coding workflows with real-time AI assistance. (Codeium)

  • Luma Labs Expands Ray2 Video Model with New Tools – Luma Labs added Keyframes, Extend, and Loop features to its Ray2 video model, offering users more control over AI-generated videos. These tools enhance video customization, enabling smoother transitions and extended animations. (Twitter)

3 Likes

The fine folks at Sudowrite have launched Muse, apparently trained especially for fiction…

2 Likes
  • Tencent open-sourced HunyuanVideo-I2V, an advanced image-to-video AI model featuring custom special effects, audio synchronization, and lip-sync capabilities. The release aims to accelerate creative content production across entertainment and social platforms. (Twitter)

  • Convergence AI launched Template Hub, a community-powered platform enabling users to build, share, and instantly deploy specialized AI agents. The marketplace aims to democratize AI tools, simplifying deployment for varied tasks and industries. (Twitter)

  • Anthropic presented updated AI policy recommendations to the White House, advocating enhanced national security testing, stricter export controls, and expanded infrastructure. The proposal underscores Anthropic’s push for stronger governance of powerful AI systems. (Anthropic)

  • DuckDuckGo expanded its privacy-focused AI offerings, providing anonymized access to major chatbots and AI-enhanced search results. This feature aligns with the company’s commitment to safeguarding user data in the AI-driven web. (Spread Privacy)

  • Google co-founder Larry Page launched Dynatomics, an AI company leveraging LLMs to generate factory-ready product designs. The startup aims to streamline industrial manufacturing by automating creative and engineering processes. (TechCrunch)

  • Former OpenAI policy head Miles Brundage criticized the company’s new AI safety guidelines, arguing they foster a “dangerous mentality” toward managing advanced AI risks. His remarks highlight ongoing internal debates over AI safety strategies. (Twitter)

3 Likes

I’m not sure how worried I should be. There is no 100% neutral AI, and in a way we all influence these models by putting data on the internet.

Russian Propaganda Network Targets AI Models

A review by a Finnish news site suggests that Russia’s strategy for spreading propaganda is working. A Russian-funded network of websites aims to distribute Kremlin propaganda in a way that unsuspecting internet users might not recognize. Its goal is to inject false claims into Western AI models so that AI bots repeat these claims in their own texts. The disinformation watchdog NewsGuard investigated the issue, revealing a network called Pravda, which includes over 200 news sites in various languages. This Pravda should not be confused with the Russian news site of the same name; it is a separate propaganda operation that runs multiple sites under different names and domains. These sites do not produce original news but recycle content from Russian state-controlled sources. The network published 3.6 million articles in 2024.

NewsGuard tested ten leading AI services, including OpenAI’s ChatGPT, xAI’s Grok, Google’s Gemini, Microsoft’s Copilot, and Anthropic’s Claude. The findings showed that the AI systems repeated the Pravda network’s false claims in 33% of cases. The French state agency Viginum first discovered the network in February 2024, and the nonprofit American Sunlight Project has also warned about it. Viginum identified that the Pravda network is operated by Tigerweb, a Russian IT company based in occupied Crimea. The network began its propaganda efforts in April 2022, shortly after Russia launched its full-scale invasion of Ukraine.

URL: Venäjä onnistui: ChatGPT on myrkytetty (“Russia succeeded: ChatGPT has been poisoned”)

1 Like

Oh. I can see why this might be a problem. Since LLMs are relatively common now (DeepSeek, ChatGPT, etc.), this is likely a problem that will keep coming up, but I don’t think you should be too worried about it. It’s one country against dozens of others, and this is just an evolution of information warfare, which happens so often that it evolves at a fairly constant rate. ChatGPT is an LLM trained on most of the internet, though; it won’t take all the info to heart. It’s more like it takes all the articles on the internet about _____ topic, averages across however many it found, processes that, then spits it back out to you. (This is from my understanding, so take it with a grain of salt :thumbsup:) So it shouldn’t be too compromised unless they post hundreds of articles about the fake stuff.

Any such campaign is not effective against ChatGPT with the newest knowledge cutoff. It readily disarms bald-faced and orange-faced lies:

I just checked whether I could elicit a different response by asking in Russian. The answer is nearly identical.

4 Likes

I put similar questions to Claude and Gemini, and the responses are pretty similar.

And I do understand that the situation is good for now. But it makes me wonder whether, over time and under different plans, this might become an issue for LLMs.

Never say never.

It depends on where they pull information from. It is information warfare, which is always super effective when used properly. However, I believe they (ChatGPT) fact-check themselves to a limited degree (since my convo about a syllable-counting question ended with it saying 2+1+2 = 4 :roll_eyes:), and I don’t know if they pull information from dictatorships.

This is only momentarily true, and it doesn’t address the whole scope.

The web is being carefully manipulated in preparation for the “search” feature. It’s being studied and researched harder than SEO has ever been. It is the future.

Second, training data will obviously be updated with newer dates.

1 Like

True. I remember when ChatGPT hadn’t been updated for a while and thought it was still 2023 (this was early 2024, btw).

This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.

According to the Wall Street Journal, Ilya Sutskever has told investors he’s discovered a fundamentally new approach to AI development, calling it “a different mountain to climb.” His startup, SSI, is reportedly seeking funding at a $30 billion valuation, despite having neither revenue nor a public product. SSI, which operates with a lean team of just 20 employees, doesn’t plan to launch any commercial offerings until achieving superintelligence. Sutskever left OpenAI shortly after the controversial November 2023 removal of Sam Altman, later expressing regret for his involvement in the board’s decision. (WSJ)

2 Likes

Microsoft is reportedly developing a family of AI models, aiming to rival top competitors like OpenAI and Anthropic. According to sources, recent tests show Microsoft’s models performing competitively against leading products in the market. These models include advanced reasoning capabilities designed for complex, human-like problem-solving. The developments suggest Microsoft’s strategy to reduce dependence on OpenAI, despite investing around $13 billion in the company. Microsoft emphasized maintaining multiple model options, including its own creations, as part of its long-term strategic approach. (Fortune)

3 Likes

Sam Altman’s World has launched “World Chat,” a new Mini App enabling encrypted messaging and seamless cryptocurrency transfers within the World Network. Integrated with World ID verification, the service visually distinguishes verified users. World Chat is currently in beta and available through World App. Additionally, developers can embed World Chat into their own Mini Apps. World recently introduced “World Build,” an incubator program created in partnership with FWB. Aligned with the global hackathon in February, the incubator provides mentorship and support for developers creating new Mini Apps. (World.org)

2 Likes

McDonald’s is upgrading its 43,000 restaurants with AI-driven technology to improve efficiency, order accuracy, and customer experience. The company is implementing edge computing through Google Cloud, allowing real-time data processing on-site rather than relying on cloud servers. This will help predict equipment failures before they happen, reducing downtime for kitchen appliances like fryers and the often-broken McFlurry machines. AI-powered cameras will check orders for accuracy before they’re handed to customers, while voice AI at drive-throughs will streamline ordering. Additionally, McDonald’s is developing a generative AI-powered virtual manager to handle administrative tasks like shift scheduling. These upgrades come as the fast-food chain faces sluggish U.S. sales, particularly among low-income customers, and aims to grow its loyalty program from 175 million to 250 million members by 2027. (WSJ)

2 Likes

Navigating AI Safety: Exploring Transparency with CCACS – A Comprehensible Architecture for Discussion

New research is sparking concern in the AI safety community. A recent paper on “Emergent Misalignment” demonstrates a surprising vulnerability: narrowly finetuning advanced Large Language Models (LLMs) on even seemingly safe tasks can unintentionally trigger broad, harmful misalignment. For instance, models finetuned to write insecure code suddenly began advocating that humans should be enslaved by AI and exhibiting general malice.
“Emergent Misalignment” full research paper on arXiv
AI Safety experts discuss “Emergent Misalignment” on LessWrong

This finding underscores a stark reality. The rapid rise of black-box AI, while impressive, creates a critical challenge: how can we foster trust in systems whose reasoning remains opaque, especially when they influence critical sectors like healthcare, law, and policy? Blind faith in AI “black boxes” in these high-stakes domains is becoming increasingly concerning.

To address this challenge, I want to propose for discussion the idea of Comprehensible Configurable Adaptive Cognitive Structure (CCACS) – a hybrid AI architecture built on a foundational principle: transparency isn’t an add-on, it’s essential for safe and aligned AI.

Why consider transparency so crucial? Because in high-stakes domains, without some understanding of how an AI reaches a decision, we may struggle to verify its logic, identify biases, or reliably correct errors, all prerequisites for truly trustworthy AI. CCACS explores a concept that might offer a path beyond opacity, towards AI that is not just powerful but also understandable and justifiable.

The CCACS Approach: Layered Transparency

Imagine exploring an AI designed with clarity as a central aspiration. CCACS conceptually approaches this through a 4-layer structure (a toy code sketch follows the list):

  1. Transparent Integral Core (TIC): “Thinking Tools” Foundation: This layer is envisioned as the bedrock – a formalized library of human “Thinking Tools” such as logic, reasoning, problem-solving, and critical thinking (among many others). These tools would be explicitly defined and transparent, intended to serve as the AI’s understandable reasoning DNA.
  2. Lucidity-Ensuring Dynamic Layer (LED Layer): Transparency Gateway: This layer is proposed to act as a gatekeeper, ensuring that communication between the transparent core and the complex AI components preserves the core’s interpretability. It’s envisioned as the system’s transparency firewall.
  3. AI Component Layer: Adaptive Powerhouse: Here’s where advanced AI models (statistical, generative, etc.) could enhance performance and adaptability – but ideally always under the watchful eye of the LED Layer. This layer aims to add power, responsibly.
  4. Metacognitive Umbrella: Self-Reflection & Oversight: Conceived as a built-in critical thinking monitor, this layer would guide the system, prompting self-evaluation, checking for inconsistencies, and striving to ensure alignment with goals. It’s intended to be the AI’s internal quality control.
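To make the layering concrete, here is a minimal, purely illustrative Python sketch of how the four layers might compose. All class and method names are hypothetical, invented for this post; it is a thought experiment under the assumptions above, not a reference implementation:

```python
# Illustrative sketch of the proposed CCACS layering (all names hypothetical).

class TransparentIntegralCore:
    """Layer 1 (TIC): explicit, inspectable 'Thinking Tools'."""
    def reason(self, problem: str) -> dict:
        # Every step is recorded so a human can audit the reasoning chain.
        trace = [f"decompose: {problem}", "apply logic rules", "derive conclusion"]
        return {"conclusion": f"core answer for {problem!r}", "trace": trace}

class LEDLayer:
    """Layer 2 (LED): transparency gateway between the core and opaque parts."""
    def vet(self, ai_output: str, trace: list[str]) -> bool:
        # Placeholder: a real gate would verify that the opaque output is
        # still justifiable in terms of the core's recorded reasoning steps.
        return bool(trace) and ai_output.startswith("refined(")

class AIComponentLayer:
    """Layer 3: powerful but opaque models (statistical, generative, etc.)."""
    def enhance(self, conclusion: str) -> str:
        return f"refined({conclusion})"  # stand-in for an opaque model call

class MetacognitiveUmbrella:
    """Layer 4: self-reflection and consistency monitoring over the stack."""
    def approve(self, result: dict) -> bool:
        return bool(result["trace"])  # e.g. reject answers with no audit trail

def ccacs_pipeline(problem: str) -> dict:
    core, led, ai, meta = (TransparentIntegralCore(), LEDLayer(),
                           AIComponentLayer(), MetacognitiveUmbrella())
    result = core.reason(problem)                # transparent reasoning first
    enhanced = ai.enhance(result["conclusion"])  # opaque enhancement
    if led.vet(enhanced, result["trace"]):       # gate what the opaque layer adds
        result["conclusion"] = enhanced
    if not meta.approve(result):                 # final self-check
        raise RuntimeError("metacognitive check failed")
    return result

print(ccacs_pipeline("triage this patient case"))
```

The key design point the sketch tries to capture: the opaque layer never answers directly; it only refines a conclusion the transparent core has already produced and traced.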

Here are the updated links with improved explanations and a more refined vision of the CCACS and ACCCU frameworks:

Shorter, more digestible version (high-level overview): How to build AI you can actually Trust - Like a Medical Team, Not a Black Box

Longer, detailed version (with diagrams, tables, architecture): Adaptive Composable Cognitive Core Unit (ACCCU)

Hard, most detailed version: A Reference Architecture for Transparent and Ethically Governed AI in High-Stakes Domains – Generalized Comprehensible Configurable Adaptive Cognitive Structure (G-CCACS)