AI Pulse Edition #2: Latest AI News Updates for the Developer Community

AI Pulse – Edition #2

In this edition of AI Pulse, we zoom in on OpenAI’s groundbreaking o1-preview and o1-mini reasoning models, which push the boundaries of complex problem-solving. We also highlight the latest efforts by Google, Apple, and Mistral to shape the AI landscape with the release of DataGemma, Apple’s new personal intelligence system, and Mistral’s Pixtral 12B multimodal model. Additionally, we examine initiatives by governments across Europe, the U.S., Asia, and the Middle East to create essential AI infrastructure and ecosystems, including the European Commission’s launch of AI Factories and the White House’s establishment of a new AI Datacenter Task Force in collaboration with major industry players, alongside ongoing efforts to address AI risks. In the realm of AI economics, we cover the latest updates on the potential removal of OpenAI’s profit cap to secure a $150 billion valuation and Ilya Sutskever’s funding efforts for his new startup SSI, while our research highlights showcase AI’s significant impact on programming productivity and GPT-4’s potential to reduce belief in conspiracy theories.

Enjoy the read! Your AI Pulse Team
@platypus @trenton.dambrowitz @dignity_for_all @PaulBellow @vb @jr.2509

Interested in contributing? Drop us a note!


Table of contents

1. Technology Updates
2. Infrastructure Initiatives
3. Government & Policy Initiatives
4. Legal Matters
5. AI Economics
6. Research
7. Dev Alerts


1. Technology Updates

OpenAI’s Next Leap: OpenAI is pushing the boundaries of AI with its new reasoning models o1-preview and o1-mini, which are designed for complex problem-solving, featuring unique reasoning tokens for internal “thought” processes and up to 65,536 output tokens.


OpenAI has released its much anticipated new reasoning models. Dubbed o1-preview and o1-mini, the new models are designed to tackle complex problems in science, coding, and mathematics, as well as other reasoning tasks and broader philosophical questions. Both models are trained with reinforcement learning to perform complex chain-of-thought reasoning and can generate an internal chain of thought before responding to the user. Specifically, they introduce the concept of reasoning tokens, which they use internally to “think,” breaking down their understanding of the prompt and considering multiple approaches before generating a response. While these reasoning tokens are not visible via the API, they occupy space in the model’s context window and are billed as output tokens. Both models offer a context window of 128,000 tokens, with maximum output limits of 32,768 tokens for o1-preview and 65,536 tokens for o1-mini; input tokens are counted the same way as for gpt-4o, using the same tokenizer.

While still in beta, the models come with several limitations, including the absence of common chat completion API parameters such as system messages and core hyperparameter settings, no streaming or function calling, and no support for images. Additionally, only Tier 5 users can currently access the models via the API, while access via ChatGPT Plus is now limited to 50 messages per week for o1-preview, resetting every 7 days, and 50 messages per day for o1-mini.

In an AMA held on X, OpenAI staff nevertheless confirmed plans for several enhancements, including access for more tiers, higher rate limits, improvements in latency, larger input contexts, multimodal capabilities, and support for streaming, function calling, code interpreter, browsing, and structured outputs. The models are also planned to become available under the batch API and, eventually, for fine-tuning. While there are currently no plans to reveal reasoning tokens, the team indicated plans to enable greater control over thinking time and is exploring the option to pause reasoning during inference. The team also revealed that the icon of o1 is metaphorically an alien of extraordinary ability. Source: OpenAI
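For developers, here is a minimal sketch of what a beta o1 call might look like using the official openai Python SDK (assuming a Tier 5 API key; the usage fields shown reflect our reading of the current API and may evolve):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY; o1 API access currently requires Tier 5

# While in beta, o1 requests omit system messages, temperature/top_p,
# streaming, and tools -- just a plain user message.
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
)

print(response.choices[0].message.content)

# Reasoning tokens are invisible in the response, but they occupy context
# and are billed as output tokens; usage reports them separately.
usage = response.usage
print("output tokens:", usage.completion_tokens)
print("of which reasoning:", usage.completion_tokens_details.reasoning_tokens)
```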

OpenAI’s API Journey: OpenAI’s API tech lead Michelle Pokrass reveals on the Latent Space podcast how new features like Structured Outputs and JSON mode are enhancing AI model reliability and developer efficiency.


Michelle Pokrass, the tech lead for the API at OpenAI—whose extensive background spans roles at Google, Stripe, Coinbase, Clubhouse, and co-founding Readwise—joined the Latent Space podcast to discuss her journey and the latest advancements in OpenAI’s API platform. The conversation involved a deep dive into the development of Structured Outputs and JSON mode. Pokrass explained the differences between these two modes, highlighting how they enable more precise and efficient interactions with AI models, and discussed the implementation of the refusal field, limitations within the HTTP specification, and the significance of function calling in building advanced AI applications. She also addressed the evolution of the Assistants API, strategies for fine-tuning, and challenges associated with determinism. Finally, Pokrass shared insights into OpenAI’s approach to engaging with developers and reflected on her own journey at the company, emphasizing the qualities OpenAI seeks when hiring. As the “strawberry on the cake,” the podcast episode also features an ad-hoc follow-up with the broader OpenAI team amid the o1 release. Source: Latent Space
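To make the distinction concrete, here is a sketch of Structured Outputs using the documented openai Python SDK helpers (the model snapshot and schema are our own example). Structured Outputs constrains decoding to a supplied JSON Schema, whereas JSON mode only guarantees syntactically valid JSON:

```python
from openai import OpenAI
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()

# parse() derives a strict JSON Schema from the Pydantic model and asks the
# model to conform to it exactly (unlike JSON mode, which only promises JSON).
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,
)

message = completion.choices[0].message
if message.refusal:
    # The refusal field discussed on the podcast: set when the model declines,
    # so refusals never masquerade as schema-shaped data.
    print(message.refusal)
else:
    print(message.parsed)  # a validated CalendarEvent instance
```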

Google’s DataGemma: Google ships DataGemma, open models leveraging real-world data from Data Commons to reduce LLM hallucinations and enhance factual accuracy using RIG and RAG techniques.


Google has announced DataGemma, an experimental set of open models designed to reduce hallucinations in large language models (LLMs) by grounding them in real-world statistical data from Google’s Data Commons. Data Commons is a publicly available knowledge graph containing over 250 billion data points sourced from trusted organizations like the United Nations and the World Health Organization. DataGemma models leverage this data using techniques such as Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG) to enhance the factual accuracy of LLM responses. Source: Google
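To give a feel for how RIG differs from plain generation, here is a deliberately schematic sketch (not DataGemma’s actual code; both helper functions are hypothetical stand-ins):

```python
# Schematic Retrieval Interleaved Generation (RIG): the model drafts an answer
# containing inline statistical queries, each query is resolved against the
# Data Commons knowledge graph, and the grounded figure replaces the model's
# own guess. Both helpers below are hypothetical stand-ins.

def generate_draft_with_queries(question: str):
    """Stand-in for a RIG-tuned model call that emits query placeholders."""
    draft = "California's population is [DC:ca-pop] people."
    return draft, {"[DC:ca-pop]": "What is the population of California?"}

def query_data_commons(natural_language_query: str) -> str:
    """Stand-in for a Data Commons lookup returning a sourced statistic."""
    return "about 39 million"

def rig_answer(question: str) -> str:
    draft, queries = generate_draft_with_queries(question)
    for placeholder, query in queries.items():
        draft = draft.replace(placeholder, f"{query_data_commons(query)} (per Data Commons)")
    return draft

print(rig_answer("How many people live in California?"))
```

RAG, by contrast, retrieves relevant Data Commons data before generation and feeds it into the prompt as context.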

Apple Intelligence - Personal AI Right on Your Device: Apple introduces Apple Intelligence, a personal AI system integrating generative models with personal context directly on devices, offering features like Writing Tools and an enhanced Siri across iOS 18, iPadOS 18, and macOS Sequoia.


Apple has announced the rollout of Apple Intelligence, its new personal intelligence system that combines generative models with personal context, on its iPhone, iPad, and Mac devices starting next month. The system, which will initially be available in U.S. English, will be integrated into iOS 18, iPadOS 18, and macOS Sequoia. It will utilize Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks. The first set of features will include Writing Tools for refining text, a Clean Up tool in Photos for removing distracting objects, and summarized notifications across apps. Additionally, Siri is set to become more natural, flexible, and deeply integrated into the system experience. Source: Apple

Mistral AI’s First Multimodal Model: Mistral AI releases Pixtral 12B, its first 12-billion-parameter multimodal model capable of advanced image understanding and text processing, enabling tasks like image captioning and object counting.


Mistral AI has released Pixtral 12B, a 12-billion-parameter multimodal model capable of processing both text and images. Building upon Mistral’s text model Nemo 12B, Pixtral 12B enables advanced image understanding tasks such as captioning images and counting objects within photos. The model accepts images of arbitrary size and quantity, provided via URLs or base64 encoding, allowing it to answer questions about multiple images in a single prompt. At approximately 24GB in size, Pixtral 12B is accessible through the vLLM library, with Mistral AI providing recommended settings and usage examples for both simple and advanced scenarios. Source: Hugging Face, Source: TechCrunch
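Here is a sketch of what querying Pixtral through vLLM’s chat interface can look like (based on Mistral’s published examples; exact parameters may vary by vLLM version, and the image URL is a placeholder):

```python
from vllm import LLM, SamplingParams

# At roughly 24GB of weights, this assumes a GPU with sufficient memory.
llm = LLM(model="mistralai/Pixtral-12B-2409", tokenizer_mode="mistral")

messages = [{
    "role": "user",
    "content": [
        {"type": "text", "text": "How many dogs are in this photo?"},
        # Images can be passed by URL (as here) or as base64 data URIs,
        # and several images can appear in a single prompt.
        {"type": "image_url", "image_url": {"url": "https://example.com/dogs.jpg"}},
    ],
}]

outputs = llm.chat(messages, sampling_params=SamplingParams(max_tokens=256))
print(outputs[0].outputs[0].text)
```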

Matt Shumer Responds After Reflection 70B Controversy: Following controversy over the unreplicable results of Reflection 70B, OthersideAI CEO Matt Shumer addresses concerns and apologizes, clarifying issues regarding the model’s authenticity and training methods.


Matt Shumer, co-founder and CEO of OthersideAI—known for its AI writing assistant HyperWrite—has issued an apology following fraud accusations concerning his newly released large language model (LLM), Reflection 70B. Launched on September 5 and touted as “the world’s top open-source model,” Reflection 70B was claimed to achieve state-of-the-art results on benchmarks through a technique called “Reflection Tuning,” which purportedly allows the model to assess and refine its responses for correctness. However, independent researchers were unable to replicate these impressive results, with some suggesting that the model might be a variant or wrapper of Anthropic’s Claude 3.5 Sonnet model. Shumer, who did not initially disclose his investment in Glaive AI—the synthetic data generation platform used to train Reflection 70B—attributed the discrepancies to issues during the model’s upload process but has yet to provide detailed explanations or corrected model weights. Source: VentureBeat

2. Infrastructure Initiatives

EU’s AI Factories Initiative: The European Commission launches the AI Factories initiative, leveraging HPC supercomputers to empower AI developers and researchers across Europe with resources to train large generative AI models.


The European Commission has launched a call for proposals to establish AI Factories. The AI Factories will leverage the EU’s network of High-Performance Computing (HPC) supercomputers to provide AI developers, startups, industry, and researchers with access to the computing power, data, and talent needed to train large generative AI models. The AI Factories are part of the Commission’s AI innovation package presented in January 2024, which includes financial support expected to generate an additional €4 billion in public and private investments by 2027. The package also encompasses initiatives to strengthen the EU’s generative AI talent pool. Source: European Commission

Green Light for Intel AI Chip Plant in Poland: Intel receives EU approval for over €1.7 billion in state aid to build a new AI-focused semiconductor plant in Poland, boosting the country’s ambitions to become an “AI Valley.”


Intel has received approval from the European Commission for Poland to provide over €1.7 billion in state aid for a new semiconductor chip assembly and testing plant near Wrocław. The new facility will complement Intel’s existing and planned fabrication plants in Ireland and Germany, playing a crucial role in producing chips essential for AI technologies. This expansion aligns with Poland’s ambition to become an “AI Valley,” attracting major tech investments and enhancing its position in the AI industry. Source: Notes from Poland

White House’s AI Datacenter Push: The White House forms a new Task Force on AI Datacenter Infrastructure to accelerate U.S. AI datacenter and power infrastructure development, including initiatives like leveraging retired coal sites for datacenter growth.


The White House has announced the formation of a new Task Force on AI Datacenter Infrastructure following a roundtable with leaders from major AI companies, datacenter operators, and utility firms. Led by the National Economic Council, National Security Council, and the Deputy Chief of Staff’s office, the Task Force aims to coordinate policy across government to accelerate the development of AI datacenters and power infrastructure in the United States. Additional initiatives include scaling up technical assistance for datacenter permitting, creating an AI datacenter engagement team at the Department of Energy (DOE), and leveraging retired coal sites for datacenter development. Source: The White House

Saudi Arabia’s GenAI Ambition: Saudi Arabia’s Saudi Data and Artificial Intelligence Authority partners with Microsoft to establish a Center of Excellence to accelerate genAI innovation and launch the Microsoft AI Academy, focusing on Arabic large language models and developing national AI expertise.


The Saudi Data and Artificial Intelligence Authority (SDAIA) and Microsoft have signed a Memorandum of Understanding (MoU) to establish a joint Center of Excellence aimed at accelerating innovation in genAI, with a special focus on Arabic large language models. The collaboration will see the establishment of the Microsoft AI Academy in partnership with the SDAIA Academy, with the goal of developing national AI expertise through certification programs and competitions. Additionally, SDAIA’s Arabic large language model, ALLaM, will become generally available on Microsoft Azure. Source: Saudi Data and Artificial Intelligence Authority

AI Giants Scramble for GPUs: Oracle’s Larry Ellison and xAI’s Elon Musk are begging Nvidia CEO Jensen Huang for more GPUs to power their massive AI superclusters, including Oracle’s planned Zettascale AI supercluster requiring 131,072 Nvidia GB200 NVL72 Blackwell GPUs to deliver 2.4 ZettaFLOPS of AI performance.


Oracle founder Larry Ellison and xAI’s Elon Musk have reportedly pleaded with Nvidia CEO Jensen Huang for more of the company’s GPUs to power their respective AI superclusters. Oracle recently announced plans to create a Zettascale AI supercluster, composed of 131,072 Nvidia GB200 NVL72 Blackwell GPUs, which will deliver 2.4 ZettaFLOPS of AI performance. This surpasses the power of xAI’s Memphis Supercluster, which currently utilizes 100,000 Nvidia H100 AI GPUs. Oracle’s ambitious AI plans require significant power, leading the company to secure permits for three modular nuclear reactors to meet its facilities’ electrical needs. Despite being smaller than other tech giants offering data center services, Oracle Cloud Infrastructure (OCI) is pushing its investments in AI, with Ellison predicting that frontier AI models in the next three years will cost $100 billion to train. Source: Tom’s Hardware
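As a quick sanity check on the headline figure (our arithmetic, not the article’s):

```latex
\frac{2.4 \times 10^{21}\ \text{FLOPS}}{131{,}072\ \text{GPUs}}
  \approx 1.8 \times 10^{16}\ \text{FLOPS}
  \approx 18\ \text{PFLOPS per GPU}
```

A per-GPU figure of that size is only plausible as peak low-precision throughput (sparse FP8/FP4-class numbers), so the 2.4 ZettaFLOPS claim is best read as a marketing peak rather than sustained FP64 performance.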

Japan’s Rural AI Revolution: Japan invests in photonics technology to enable rural AI data centers with high-speed, low-power light-based communications, decentralizing infrastructure and fostering regional tech growth.


Japan’s government is initiating support for the development of photonics-electronics convergence technology to facilitate the construction of data centers in rural areas. Recognizing the growing demands of generative artificial intelligence and autonomous vehicles, this initiative aims to create high-speed, light-based communications networks capable of handling vast data flows with significantly lower power consumption. By processing and transmitting data as light rather than electrical signals, photonics technology enables the decentralization of data centers, reducing reliance on urban infrastructure and mitigating disaster recovery risks. The government’s plan includes incentives for establishing data centers in regions like Hokkaido, which offers abundant renewable energy resources. Source: w.media

3. Government & Policy Initiatives

Europe’s AI Ethics Blueprint: The European Commission signs the Council of Europe Framework Convention to enhance AI ethics and mitigate risks to human rights, democracy, and the rule of law with human-centric, risk-based principles.


The European Commission has signed the Council of Europe Framework Convention on AI on behalf of the European Union. The Convention aims to address risks posed by AI to human rights, democracy, and the rule of law, and applies to activities within the lifecycle of AI systems undertaken by public authorities or private actors acting on their behalf. It incorporates several core principles of the EU AI Act, including a human-centric approach to AI, a focus on risk-based strategies, and provisions for trustworthy AI, such as transparency, robustness, safety, and data governance. It also supports AI innovation through regulatory sandboxes. Source: European Commission

Australia’s Push for AI Safeguards: Australia introduces a Voluntary AI Safety Standard with ten guardrails on transparency and accountability, and proposes mandatory regulations for high-risk AI to strengthen safeguards.


The Australian Government has released a new Voluntary AI Safety Standard and a Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings. The Voluntary AI Safety Standard, effective immediately, provides practical guidance for businesses to safely develop and deploy AI systems, including ten voluntary guardrails that address transparency, accountability, and risk management across the AI supply chain. The Proposals Paper outlines a proposed definition of high-risk AI, the ten proposed mandatory guardrails, and three regulatory options to enforce these guardrails. Source: Australian Government

Crafting Global Military AI Guidelines: South Korea hosts a summit with over 90 nations to craft global guidelines for responsible military AI use, aiming to prevent abuse and ensure human oversight over autonomous weapons.


South Korea has convened an international summit in Seoul, gathering representatives from over 90 nations, including the United States and China, to establish a blueprint for the responsible use of AI in the military. The two-day summit aimed to set minimum guardrails for military AI applications amid growing concerns over the potential for abuse, highlighted by recent developments such as AI-enabled drones in the Russia-Ukraine conflict. Discussions centered on legal measures to ensure compliance with international law and mechanisms to prevent autonomous weapons from making life-and-death decisions without appropriate human oversight. Source: Reuters

Russia’s Election AI Debate: Russian lawmaker Anton Gorelkin proposes banning AI in election campaigning to prevent manipulation and calls for legislation to define deepfakes and bolster detection tools.


State Duma deputy Anton Gorelkin has proposed that Russia consider banning the use of AI in election campaigning. Highlighting the high risk of manipulative technologies in elections, he suggested that a complete ban on AI could help mitigate these risks. Speaking at the Information Center of the Central Election Commission, Gorelkin also called for legislation to define the concept and types of deepfakes, and to equip regulatory authorities with tools to detect them. Source: RIA Novosti

4. Legal Matters

Meta Reboots UK AI Training: Meta resumes AI training in the UK using public Facebook and Instagram posts, addressing regulatory concerns by simplifying user opt-out processes and excluding private messages and underage data.


Meta Platforms will resume training its AI models in the UK using public content from Facebook and Instagram, after previously pausing due to regulatory concerns. The company will utilize public posts—including photos, captions, and comments—to train its genAI models, excluding private messages and data from users under 18. Following exchanges with the UK’s Information Commissioner’s Office (ICO), Meta has simplified the process for users to object to their data being used and will send in-app notifications explaining the procedure. The move comes after Meta addressed regulatory backlash and adapted its approach to comply with privacy and transparency requirements. Source: Reuters

AI-Fueled Fraud: North Carolina musician Michael Smith is charged with a $10 million fraud for using AI to create songs and bots to stream them billions of times, manipulating royalties on major music platforms.


A North Carolina musician, Michael Smith, has been charged by the U.S. Department of Justice for orchestrating an AI-enabled music streaming fraud scheme. Smith allegedly used AI to create hundreds of thousands of songs and deployed automated bots to stream AI-generated songs billions of times on platforms such as Amazon Music, Apple Music, Spotify, and YouTube Music, allowing him to fraudulently obtain over $10 million in royalty payments. The scheme involved creating numerous fake accounts to stream the songs continuously, spreading the streams across thousands of tracks to avoid detection. Smith was arrested and now faces charges of wire fraud conspiracy, wire fraud, and money laundering conspiracy, each carrying a maximum sentence of 20 years in prison. Source: U.S. Department of Justice

5. AI Economics

OpenAI’s Profit Cap Under Review: OpenAI’s $150B valuation hinges on a major corporate overhaul, including the potential removal of the existing profit cap to attract more investment, as it seeks to raise $6.5 billion in new funding with participation from existing and new investors.


OpenAI is reportedly in advanced talks to raise $6.5 billion in new funding through convertible notes, which could value the company at $150 billion contingent upon significant corporate restructuring. The restructuring would involve removing the existing profit cap for investors and potentially shifting from its current non-profit governance model to a for-profit benefit corporation. This change aims to attract more investment to fund OpenAI’s pursuit of AGI. The removal of the profit cap would require approval from OpenAI’s non-profit board. Existing investors including Thrive Capital, Khosla Ventures, and Microsoft are expected to participate in the funding round, along with new investors such as Nvidia and Apple. Source: Reuters

$1 Billion Boost for Safe Superintelligence: Safe Superintelligence (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever, raises $1 billion to develop AI systems surpassing human capabilities while ensuring safety.


Safe Superintelligence (SSI), the new startup co-founded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion in funding to develop safe AI systems that surpass human capabilities. The funds will be used to acquire computing power and hire top talent, with a focus on building a small, highly trusted team of researchers and engineers. The start-up, which is reportedly valued at $5 billion, aims to spend a couple of years on R&D before bringing its product to market. SSI’s mission is to ensure AI safety, a topic of increasing concern amid fears of rogue AI acting against humanity’s interests. The company plans to partner with cloud providers and chip companies to fund its computing power needs. Source: Reuters

Nvidia x Sakana AI: Nvidia becomes a major shareholder in Tokyo-based Sakana AI with a $100 million investment, boosting Japan’s AI innovation through sustainable, nature-inspired technologies.


Nvidia has become a major shareholder in Sakana AI, a Tokyo-based generative AI startup, following a $100 million Series A funding round. The investment marks Nvidia’s increased involvement in Japan’s AI ecosystem and its commitment to advancing AI innovation in the region. Sakana AI, which focuses on evolutionary optimization, foundation models, and nature-inspired intelligence, aims to develop sustainable AI technologies to address Japan’s demographic decline and need for technological competitiveness. Source: All about AI

Walmart’s GenAI Transformation: Walmart leverages generative AI to enhance data quality and customer experience, improving its product catalog and empowering employees with AI tools.


Walmart is utilizing genAI to improve the data quality of its product catalog in an effort to enhance the customer experience (CX). The retail giant’s genAI application is enabling quicker access to product images and information for both associates and customers. Walmart’s AI assistant will soon provide follow-up answers to customer queries, assisting them in finding the right product. The company has used large language models to create or improve over 850 million pieces of data across its product catalog, a process that would have required 100 times the workforce without genAI. The improved data quality is enhancing in-store operations, allowing associates to quickly locate inventory and deliver orders. Walmart has also expanded its internal genAI adoption, providing an additional 25,000 employees access to a proprietary genAI tool. Source: CIO Dive


6. Research

AI Supercharges Coders: Studies reveal AI tools like GitHub Copilot boost programmer productivity by 26%, with junior developers benefiting most from AI-assisted coding.


Several new studies have investigated the impact of genAI on programming. An analysis of three randomized controlled trials involving 4,867 developers at Microsoft, Accenture, and a Fortune 100 electronics manufacturing company found that those with access to GitHub Copilot completed 26.08% more tasks than those without the tool. Similarly, a working paper published by the Bank for International Settlements reported on a field experiment using CodeFuse, a large language model developed by Ant Group to assist programming teams. Findings highlighted a significant increase in productivity, measured by the number of lines of code produced, which was most pronounced among junior staff as a result of their more active engagement with the tool. Source: SSRN, Source: Bank for International Settlements

AI vs. Conspiracies: An AI chatbot powered by GPT-4 successfully debunks conspiracy theories, reducing believers’ conviction by 21% and sustaining the change over time.


Researchers have developed an AI chatbot using OpenAI’s GPT-4-Turbo that can debunk conspiracy theories and influence people to reconsider their beliefs. The study involved participants interacting with the chatbot, which provided detailed counterarguments to their conspiracy theories. The researchers recruited over 1,000 participants, who were asked to describe a conspiracy theory they believed in and rate their conviction. After interacting with the chatbot, participants’ confidence in their chosen conspiracy theory decreased by an average of 21%. A follow-up survey two months later showed that the shift in perspective had persisted for many participants. Source: Nature.com

AI in Finance: Financial regulators in New Zealand, Canada, and Sweden explore AI adoption in finance, with studies showing proactive integration and positive reception to AI-driven services.


New Zealand’s Financial Markets Authority (FMA) has published new research on the use of AI in financial services, revealing a cautious yet proactive approach among firms towards integrating AI technologies. Based on responses from 13 financial service providers across asset management, banking, financial advice, and insurance, the research indicates that all participants are either currently using AI or plan to adopt it soon, motivated by goals such as enhancing customer outcomes, improving operational efficiency, and strengthening fraud detection and risk management. Concurrently, Canada’s Ontario Securities Commission (OSC) has published a report on how AI influences retail investor decision-making, based on a behavioural science experiment involving an online investment simulation where participants were given a hypothetical $20,000 to invest and received suggestions from either a human financial services provider, an AI tool, or a combination of both. Findings indicate that Canadians are receptive to AI-driven financial advice. Meanwhile, the Swedish Financial Services Authority (FI) has initiated a study to map out AI utilization in Sweden’s financial sector to deepen its understanding of current and anticipated AI use cases, firms’ perceptions of related benefits and challenges, and the expected impact of the EU AI Act on their operations. Source: New Zealand Financial Markets Authority, Source: Ontario Securities Commission, Source: Swedish Financial Services Authority

7. Dev Alerts

Don’t miss these deprecation and shutdown dates

  • As of September 13, gpt-3.5-turbo-0613 and gpt-3.5-turbo-16k-0613 were officially shut down. Other model snapshots, including gpt-3.5-turbo-0125 and gpt-3.5-turbo-1106, remain available.

  • New fine-tuning training runs on babbage-002 and davinci-002 will no longer be supported after October 28, 2024, and developers are recommended to switch to gpt-4o-mini (see the sketch after this list). Existing fine-tuned babbage-002 and davinci-002 models will remain accessible.

  • Both gpt-4-vision-preview and gpt-4-1106-vision-preview will be shut down on December 6, 2024. Recommended replacement is gpt-4o.
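For those migrating off babbage-002 and davinci-002, here is a minimal sketch of starting a gpt-4o-mini fine-tune with the openai Python SDK (the training file ID is a hypothetical placeholder):

```python
from openai import OpenAI

client = OpenAI()

# New fine-tuning runs should target gpt-4o-mini rather than the retiring
# babbage-002 / davinci-002 base models; the jobs API itself is unchanged.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",     # hypothetical ID from client.files.create(...)
    model="gpt-4o-mini-2024-07-18",  # fine-tunable gpt-4o-mini snapshot
)
print(job.id, job.status)
```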

Explore o1 use cases with two new OpenAI cookbook additions

  • Using reasoning for data validation: The example covers how to use o1-preview to perform data validation on a synthetic medical dataset, demonstrating the process of generating data with intentional errors, validating its accuracy using the model, and analyzing the results to assess the model’s precision and recall in identifying and explaining data issues (a toy version follows this list). Link

  • Using reasoning for routine generation: This example covers the process of using o1-preview to convert customer service knowledge base articles into structured, executable routines that a language model can follow to automate customer interactions efficiently. Link
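In the spirit of the data-validation entry, here is a toy version of the pattern (the record and prompt are our own illustration, not the cookbook’s code):

```python
from openai import OpenAI

client = OpenAI()

# Hand o1-preview a record containing a deliberate error and ask it to flag
# inconsistencies; the cookbook scales this up and scores precision/recall.
record = {"patient_age": 6, "medication": "ibuprofen", "dosage_mg": 5000}

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": f"Review this medical record for implausible values and explain any issues: {record}",
    }],
)
print(response.choices[0].message.content)
```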

Find your OpenAI dream role

OpenAI currently has 187 open positions across San Francisco and several other key locations including New York City, London, Tokyo, Dublin, Seattle, and Singapore. Available roles span all areas, with major concentrations in Applied AI, Research, and Go-To-Market.

Get all the insights here
  1. Global Reach and Diverse Locations:
  • International Expansion: Positions in major cities like San Francisco, New York City, London, Tokyo, Dublin, Seattle, and Singapore highlight OpenAI’s drive to expand its international footprint.

  • Remote Opportunities: Several roles are open to remote candidates, reflecting OpenAI’s flexible work culture and adaptation to the evolving workplace landscape.

  2. Dominance of Engineering and Research Roles:
  • Engineering Focus: A significant portion of the openings are in Applied AI Engineering, Platform Engineering, Research, and Security.

  • Specialized Positions: Several highly specialized roles are available like GPU Kernels Engineer, HW/SW Co-Design Engineer, and Distributed Training Engineer.

  3. Strengthening Business and Customer Relations:
  • Sales and Account Management: Positions such as Account Director, Sales Leader, and Customer Success Manager indicate efforts to expand client engagement and market reach.

  • Global Affairs and Partnerships: Roles like Japan Policy & Partnerships Lead and Senior Manager, Partner Management emphasize building strategic alliances and navigating regional regulations.

  4. Investment in Human-Centric AI Development:
  • Human Data and Alignment: Jobs in Human Data and Alignment departments suggest a focus on refining AI models to be more aligned with human values and ethics.

  • Safety and Security: Positions like Research Scientist, Model Safety and Security Engineer, Detection & Response highlight a commitment to developing safe and secure AI systems.

  5. Operational and Organizational Growth:
  • People and Workplace Management: Roles such as Benefits Operations Specialist and Workplace Operations Manager reflect efforts to enhance employee experience and operational efficiency.

  • Finance and Legal: Openings for Revenue Accounting Manager, Tax Director, and Head of Internal Controls indicate a scaling of financial operations and compliance functions.

  6. Enhancing Communication and Public Engagement:
  • Communications and Marketing: Positions in Media Relations, Digital Marketing Lead, and Events Operations Manager show a drive to amplify OpenAI’s public presence and stakeholder engagement.

  • Design and User Experience: Roles like Design Engineer, Communications Design and Front End Software Engineer focus on improving user interfaces and overall experience.


Would be cool to confirm whether the developers in our community have also ‘supercharged’ their productivity using AI?

I think that’s a pretty big word…


I’m merely turbocharged. Let’s do some context switching…

6: Where the Journalist AI doesn’t fall for the press-release hype, doesn’t fabricate, and is not a promotion machine.

A recent study from the Bank for International Settlements explores the productivity impacts of generative AI on coding. The research focused on CodeFuse, an AI developed by Ant Group, which was introduced to certain programmers while others remained as a control group. Results indicated a significant 55% increase in productivity among those using the AI, primarily due to its direct code contributions and enhanced efficiency in other tasks. Interestingly, the productivity gains were markedly higher among junior programmers compared to their senior counterparts, attributed to the latter’s lower engagement with the AI tool. This highlights potential generational divides in adopting new technologies within professional settings.

The study also examined the specific functionalities of CodeFuse that contributed to enhanced productivity. The AI notably assisted with debugging, reducing error rates and the time typically spent troubleshooting. Furthermore, CodeFuse provided real-time coding suggestions, which streamlined the coding process and improved the quality of code. This suggests that AI tools like CodeFuse not only speed up individual tasks but also enhance overall code integrity, which could be a significant advantage in scaling tech projects. For more details, you can access the full study here.

The study mentioned, conducted by the Bank for International Settlements (BIS), is part of its broader mission to support monetary and financial stability through innovation and international cooperation. It’s important to note that the findings of the study are not peer-reviewed, meaning they haven’t been evaluated by independent experts to the same rigorous standard as peer-reviewed research. This could influence the reliability and generalizability of the results. The BIS, established in 1930 and owned by 63 central banks representing about 95% of the world GDP, plays a significant role in global financial infrastructure, with its headquarters in Basel, Switzerland. For more information on BIS and its initiatives, you can visit their official website.


I’ve heard a lot of “I saved X hours with o1…” lately.


OK, forget DataGemma or RIG (Retrieval Interleaved Generation, essentially just mini-functions lol)

I didn’t know about DataCommons - and it looks like it’s free :face_with_monocle: - if it does what it says on the tin, that’d be massive.


Things are moving so fast these days. It’s one of the reasons we wanted to try to do a good roundup.


If you want to join the team, learn more here…


This is not official (yet), but I wanted to try out Google’s NotebookLM. I was pleasantly surprised. I took the text of the first two posts and made this…

The tool is here…

https://notebooklm.google.com

… super simple to get the audio… then a few more steps to animate…

If there’s interest, we might work on making an even better podcast for each edition… Thoughts?


See, while the NotebookLM thing is cool, I would rather see some of us share funny and witty banter / commentary on the news in AI Pulse. I think if it can be made into something fun and entertaining, it would do well.

I look at the Vergecast as a good example. You read about what happens on The Verge (or AI Pulse, in this case), and then through their podcast you can learn how people are feeling and thinking about current events.

Maybe it’s just me, but I find a lot of the podcasts I enjoy come from the people who bring the information to life, not the information itself.

Also, I’ve been digging into something like this because me and a friend were going to start our own podcast, but she’s just too busy to do it right now lol. I just don’t think I could do a solo podcast because all my favorite ones have at least 2-3 people in them (even if it’s just an interview with someone, that’s still 2 people).


You can call it the AI Pulse Monitor :laughing:


I mean, honestly I’d listen to that :rofl:
The Pulse Monitor would actually fit well


Agree! I mostly wanted to play with the NotebookLM thing… which was slick…

Are you two volunteering?!?!


Seriously, though, I like where you’re going… much more personal and focused on our community here…

Maybe we could just write a script then have AI voice it? I like the idea of two real people tho…

I was kinda blown away by how conversational NotebookLM sounded… but still not perfect…


:smirk:

Yeah, it’s definitely impressive, I can’t deny that. I gave it a Wikipedia article on the Lesser Key of Solomon and that was fun lol. It’s a great way to listen to things you don’t have the time to read.


Thank you for this excellent update.

The increase in compute is a game changer that is driving trillion-parameter foundation models forward full throttle.

I’ve been working on the low-level architecture of these models at the GPU / autoregressive-token level.

I see GPT-4o and o1 merging to add o1’s analytical skills to GPT-4o’s creative abilities.

At some point, only a few entities will reach zettascale AI. Synthetic data through complex AI simulations will become critical.

As such, I see an “o1-data-enhancer” autoregressive data model emerge one of these days.

Thanks for this interesting update.


No problem. It’s definitely a team effort with human + AI in the mix to put it all together. We’re still iterating to make it even better, so stay tuned. :slight_smile:


Golly, you guys have done a FANTASTIC job with this. It’s so thorough. :heart:
