Software taketh away faster than hardware giveth: Why C++ programmers keep growing fast despite competition, safety, and AI

2025 was another great year for C++, and it shows in the numbers.

Before we dive into the data below, let’s put the most important question up front: Why have C++ and Rust been the fastest-growing major programming languages from 2022 to 2025?

Primarily, it’s because throughout the history of computing “software taketh away faster than hardware giveth.” There is enduring demand for efficient languages because our demand for solving ever-larger computing problems consistently outstrips our ability to build greater computing capacity, with no end in sight. [6] Every few years, people wonder whether our hardware has finally become fast enough that efficiency no longer matters, until the future’s next big software demand breaks across the industry in a huge wake-up moment of the kind that iOS delivered in 2007 and ChatGPT delivered in November 2022. AI is only the latest source of demand that pushes us to squeeze the most performance out of available hardware.

The world’s two biggest computing constraints in 2025

Quick quiz: What are the two biggest constraints on computing growth in 2025? What’s in shortest supply?

Take a moment to answer that yourself before reading on…

— — —

If you answered exactly “power and chips,” you’re right — and in the right order.

Chips are only our #2 bottleneck. It’s well known that the hyperscalers are competing hard to get access to chips. That’s why NVIDIA is now the world’s most valuable company, and TSMC is such a behemoth that it’s our entire world’s greatest single point of failure.

But many people don’t realize: Power is the #1 constraint in 2025. Did you notice that all the recent OpenAI deals were expressed in terms of gigawatts? Let’s consider what three C-level executives said on their most recent earnings calls. [1]

Amy Hood, Microsoft CFO (MSFT earnings call, October 29, 2025):

[Microsoft Azure’s constraint is] not actually being short GPUs and CPUs per se, we were short the space or the power, is the language we use, to put them in.

Andy Jassy, Amazon CEO (AMZN earnings call, October 30, 2025):

[AWS added] more than 3.8 gigawatts of power in the past 12 months, more than any other cloud provider. To put that into perspective, we’re now double the power capacity that AWS was in 2022, and we’re on track to double again by 2027.

Jensen Huang, NVIDIA CEO (NVDA earnings call, November 19, 2025):

The most important thing is, in the end, you still only have 1 gigawatt of power. One gigawatt data centers, 1 gigawatt power. … That 1 gigawatt translates directly. Your performance per watt translates directly, absolutely directly, to your revenues.

That’s why the future is enduringly bright for languages that are efficient in “performance per watt” and “performance per transistor.” The size of computing problems we want to solve has routinely outstripped our computing supply for the past 80 years; I know of no reason why that would change in the next 80 years. [2]

The list of major portable languages that target those key durable metrics is very short: C, C++, and Rust. [3] And so it’s no surprise to see that in 2025 all three continued experiencing healthy growth, but especially C++ and Rust.

Let’s take a look.

The data in 2025: Programming keeps growing by leaps and bounds, and C++ and Rust are growing fastest

Programming is a hot market, and programmers are in long-term high-growth demand. (AI is not changing this, and will not change it; see Appendix.)

“Global developer population trends 2025” (SlashData, 2025) reports that in the past three years the global developer population grew about 50%, from just over 31 million to just over 47 million. (Other sources are consistent with that: IDC forecasts that this growth will continue, to over 57 million developers by 2028. JetBrains reports similar numbers of professional developers; their numbers are smaller because they exclude students and hobbyists.) And which two languages are growing the fastest (highest percentage growth from 2022 to 2025)? Rust, and C++.

Developer population growth 2022-2025

To put C++’s growth in context:

  • Compared to all languages: There are now more C++ developers than the #1 language had just four years ago.
  • Compared to Rust: Each of C++, Python, and Java just added about as many developers in one year as there are Rust developers in the world in total.

C++ is a living language whose core job to be done is to make the most of hardware, and it is continually evolving to stay relevant to the changing hardware landscape. The new C++26 standard adds further support for hardware parallelism on the latest CPUs and GPUs, notably SIMD types for intra-CPU vector parallelism, and the std::execution Sender/Receiver model for general multi-CPU and GPU concurrency and parallelism.
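
To make that concrete, here is a minimal sketch of what those two C++26 facilities look like in code. This is illustrative only: it assumes the <simd> and <execution> headers and the spellings from the adopted proposals (P1928’s std::simd and P2300’s std::execution), and shipping compilers and standard libraries may not support them yet or may use slightly different names.

```cpp
// Illustrative sketch only -- C++26 library support is still rolling out,
// and exact spellings (especially for std::simd) may differ per implementation.
#include <execution>   // std::execution senders/receivers (P2300)
#include <simd>        // std::simd data-parallel types (P1928)
#include <print>

int main() {
    // Vector (SIMD) parallelism within one core: one statement operates on
    // however many float lanes the target's native vector registers hold.
    std::simd<float> a = 1.0f, b = 2.0f;   // broadcast-initialized
    auto c = a + b;                        // a single element-wise vector add
    std::println("first lane: {}", c[0]);

    // Sender/receiver concurrency: describe work as a pipeline of senders,
    // then launch it and wait for the result.
    namespace ex = std::execution;
    auto work = ex::just(40) | ex::then([](int x) { return x + 2; });
    auto [answer] = std::this_thread::sync_wait(std::move(work)).value();
    std::println("answer: {}", answer);
}
```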

But wait — how could this growth be happening? Isn’t C++ “too unsafe to use,” according to a spate of popular-press articles and tweets by a small number of loud voices over the past few years?

Let’s tackle that next…

Safety (type/memory safety, functional safety) and security

C++’s rate of security vulnerabilities has been far overblown in the press, primarily because some reports count only programming-language-related vulnerabilities even though those are a shrinking minority of all vulnerabilities every year, and because most statistics conflate C and C++. Let’s consider those two things separately.

First, the industry’s security problem is mostly not about programming language insecurity.

Year after year, and again in 2025, in the MITRE “CWE Top 25 Most Dangerous Software Weaknesses” (mitre.org, 2025) only three of the top 10 “most dangerous software weaknesses” are related to language safety properties. Of those three, two (out-of-bounds write and out-of-bounds read) are directly and dramatically improved in C++26’s hardened C++ standard library which does bounds-checking for the most widely used bounded operations (see below). And that list is only about software weaknesses, when more and more exploits bypass software entirely.

Why are vulnerabilities increasingly not about language issues, or even about software at all? Because we have been hardening our software; this is why the cost of zero-day exploits has kept rising, from thousands to millions of dollars. So attackers pursue that avenue less, and switch to targeting the next slowest animal in the herd. For example, “CrowdStrike 2025 Global Threat Report” (CrowdStrike, 2025) reports that “79% of [cybersecurity intrusion] detections were malware-free,” not involving programming language exploits. Instead, there was huge growth not only in non-language exploits, but even in non-software exploits, including a “442% growth in vishing [voice phishing via phone calls and voice messages] operations between the first and second half of 2024.”

Why go to the trouble of writing an exploit for a use-after-free bug to infect someone’s computer with malware, an attack that gets more expensive every year, when it’s easier to do some cross-site scripting that doesn’t depend on a programming language, and easier still to ignore the software entirely and just convince the user to tell you their password over the phone?

Second, for the subset that is about programming language insecurity, the problem child is C, not C++.

A serious problem is that vulnerability statistics almost always conflate C and C++; it’s very hard to find good public sources that distinguish them. The only reputable public study I know of that distinguishes between C and C++ is Mend.io’s, as reported in “What are the most secure programming languages?” (Mend.io, 2019). Although the data is from 2019, as you can see the results are consistent across years.

Can’t see the C++ bar? Pinch to zoom. 😉

Although C++’s memory safety has always been much closer to that of other modern popular languages than to that of C, we do have room for improvement, and we’re doing even better in the newest C++ standard about to be released, C++26. It delivers two major security improvements that you get just by recompiling your code as C++26 (a short sketch of both follows this list):

  • C++26 eliminates undefined behavior from uninitialized local variables. [4] How needed is this? Well, it directly addresses a Reddit r/cpp complaint posted just today while I was finishing this post: “The production bug that made me care about undefined behavior” (Reddit, December 30, 2025).
  • C++26 adds bounds safety to the C++ standard library in a “hardened” mode that bounds-checks the most widely used bounded operations. “Practical Security in Production” (ACM Queue, November 2025) reports that it has already been used at scale across Apple platforms (including WebKit) and nearly all Google services and Chrome (100s of millions of lines of code) with tiny space and time overhead (fraction of one percent each), and “is projected to prevent 1,000 to 2,000 new bugs annually” at Google alone.
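
Here is a small sketch of what those two changes mean for everyday code. It is illustrative only; in particular, how you turn on the hardened library is implementation-specific (libc++, for example, uses a _LIBCPP_HARDENING_MODE macro), and the code below exists only to show the C++26 semantics.

```cpp
// Illustrative sketch of the two C++26 changes described above.
#include <vector>

int main() {
    // (1) Uninitialized locals: in C++23 and earlier, reading x is undefined
    // behavior and anything may happen. In C++26 it is "erroneous behavior":
    // x holds some erroneous value, the read is well defined, and the
    // implementation is encouraged to diagnose it. A new [[indeterminate]]
    // attribute lets you opt back out when you deliberately want an
    // uninitialized buffer for performance.
    int x;
    int y = x + 1;      // C++26: erroneous but bounded, not arbitrary UB

    // (2) Hardened standard library: with hardening enabled, an out-of-bounds
    // subscript fails a bounds check and terminates deterministically instead
    // of being exploitable undefined behavior.
    std::vector<int> v{1, 2, 3};
    return v[3] + y;    // hardened mode: traps here rather than corrupting memory
}
```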

Additionally, C++26 adds support for functional safety via contracts: preconditions, postconditions, and contract assertions in the language itself, which programmers can use to check that their programs behave as intended, well beyond memory safety alone.
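
As a rough illustration, here is what the contracts syntax looks like. Treat it as a sketch: compiler support is still arriving, and the evaluation semantics (ignore, observe, enforce) are chosen by the build mode rather than by this code.

```cpp
// Illustrative sketch of C++26 contracts (preconditions, postconditions,
// and contract assertions). Whether violations are ignored, logged, or
// enforced is selected by the implementation/build mode.
#include <cmath>

double checked_sqrt(double x)
    pre(x >= 0.0)            // caller's obligation: non-negative input
    post(r : r >= 0.0)       // callee's promise: non-negative result r
{
    contract_assert(!std::isnan(x));   // in-body contract assertion
    return std::sqrt(x);
}

int main() {
    return static_cast<int>(checked_sqrt(9.0));   // contracts hold; returns 3
    // checked_sqrt(-1.0) would violate the precondition; an enforcing build
    // would report the violation and terminate instead of continuing.
}
```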

Beyond C++26, in the next couple of years I expect to see proposals to:

  • harden more of the standard library
  • remove more undefined behavior by turning it into erroneous behavior, turning it into language-enforced contracts, or forbidding it via subsets that ban unsafe features by default unless we explicitly opt in (aka profiles)

I know of people who’ve been asking for C++ evolution to slow down a little to let compilers and users catch up, much as we did for C++03. But we like all this extra security, too. So, just spitballing here, but hypothetically:

What if we focused C++29, the next release cycle of C++, on only issue-list-level items (bug fixes and polish, not new features) and the above “hardening” list (add more library hardening, remove more language undefined behavior)?

I’m intrigued by this idea, not because security is C++’s #1 burning issue — it isn’t; C++ usage is continuing to grow by leaps and bounds — but because it could address both the “let’s pause to stabilize” and “let’s harden up even more” motivations. Focus is about saying no.

Conclusion

Programming is growing fast. C++ is growing very fast, with a healthy long-term future because it’s deeply aligned with the overarching 80-year trend that computing demand always outstrips supply. C++ is a living language that continually adapts to its environment to fulfill its core mission, tracking what developers need to make the most of hardware.

And it shows in the numbers.

Here’s to C++’s great 2025, and its rosy outlook in 2026! I hope you have an enjoyable rest of the holiday period, and see you again in 2026.

Acknowledgments

Thanks to Saeed Amrollahi Boyouki, Mark Hoemmen and Bjarne Stroustrup for motivating me to write this post and/or providing feedback.


Appendix: AI

Finally, let’s talk about the topic no article can avoid: AI.

C++ is foundational to current AI. If you’re running AI, you’re running CUDA (or TensorFlow or similar) — directly or indirectly — and if you’re running CUDA (or TensorFlow or similar), you’re probably running C++. CUDA is primarily available as a C++ extension. There’s always room for DSLs at the leading edge, but for general-purpose AI most high-performance deployment and inference is implemented in C++, even if people are writing higher-level code in other languages (e.g., Python).

But more broadly than just C++: What about AI generally? Will it take all our jobs? (Spoiler: No.)

AI is a wonderful and transformational tool that greatly reduces rote work, including problems that have already been solved, where the LLM is trained on the known solutions. But AI cannot understand, and therefore can’t solve, new problems — which is most of the current and long-term growth in our industry.

What does that imply? Two main things, in my opinion…

First, I think that people who think AI isn’t a major game-changer are fooling themselves.

To me, AI is on par with the wheel (back in the mists of time), the calculator (back in the 1970s), and the Internet (back in the 1990s). [5] Each of those has been a game-changing tool to accelerate (not replace) human work, and each led to more (not less) human production and productivity.

I strongly recommend checking out Adam Unikowsky’s “Automating Oral Argument” (Substack, July 7, 2025). Unikowsky took his own actual oral arguments before the United States Supreme Court and showed how well 2025-era Claude can do as a Supreme Court-level lawyer, and with what strengths and weaknesses. Search for “Here is the AI oral argument” and click on the audio player, which is a recording of an actual Supreme Court session and replaces only Unikowsky’s responses with his AI-generated voice saying the AI-generated text argument directly responding to each of the justices’ actual questions; the other voices are the real Supreme Court justices. (Spoiler: “Objectively, this is an outstanding oral argument.”)

Second, I think that people who think AI is going to put a large fraction of programmers out of work are fooling themselves.

We’ve just seen that, today, three years after ChatGPT took the world by storm, the number of human programmers is growing as fast as ever. Even the companies that are the biggest boosters of the “AI will replace programmers” meme are actually aggressively growing, not reducing, their human programmer workforces.

Consider what three more C-level executives are saying.

Sam Schillace, Microsoft Deputy CTO (Substack, December 19, 2025) is pretty AI-ebullient, but I do agree with this part, which he says well and which resonates directly with Unikowsky’s experience above:

If your job is fundamentally “follow complex instructions and push buttons,” AI will come for it eventually.

But that’s not most programmers. Matt Garman, Amazon Web Services CEO (interview with Matthew Berman, X, August 2025) says bluntly:

People were telling me [that] with AI we can replace all of our junior people in our company. I was like that’s … one of the dumbest things I’ve ever heard. … I think AI has the potential to transform every single industry, every single company, and every single job. But it doesn’t mean they go away. It has transformed them, not replaced them.

Mike Cannon-Brookes, Atlassian CEO (Stratechery interview, December 2025) says it well:

I think [AI]’s a huge force multiplier personally for human creativity, problem solving … If software costs half as much to write, I can either do it with half as many people, but [due to] core competitive forces … I will [actually] need the same number of people, I would just need to do a better job of making higher quality technology. … People shouldn’t be afraid of AI taking their job … they should be afraid of someone who’s really good at AI [and therefore more efficient] taking their job.

So if we extend the question of “what are our top constraints on software?” beyond hardware and power, the #3 long-term constraint is clear: We are chronically short of skilled human programmers. Humans are not being replaced en masse, not most of us; we are being made more productive, and we’re needed more than ever. As I wrote above: “Programming is a hot market, and programmers are in long-term high-growth demand.”


Endnotes

[1] It’s actually great news that Big Tech is spending heavily on power, because the gigawatt capacity we build today is a long-term asset that will keep working for 15 to 20+ years, whether the companies that initially build that capacity survive or get absorbed. That’s important because it means all the power generation being built out today to satisfy demand in the current “AI bubble” will continue to be around when the next major demand for compute after AI comes along. See Ben Thompson’s great writing, such as “The Benefits of Bubbles” (Stratechery, November 2025).

[2] The Hitchhiker’s Guide to the Galaxy contains two opposite ideas, both fun but improbable: (1) The problem of being “too compute-constrained”: Deep Thought, the size of a city, wouldn’t really be allowed to run for 7.5 million years; you’d build a million cities. (2) The problem of having “too much excess compute capacity”: By the time a Marvin with a “brain the size of a planet” was built, he wouldn’t really be bored; we’d already be trying to solve problems the size of the solar system.

[3] This is about “general-purpose” coding. Code at the leading specialized edges will always include use of custom DSLs.

[4] This means that compiling plain C code (that is in the C/C++ intersection) as C++26 also automatically makes it more correct and more secure. This isn’t new; compiling C code as C++ and having the C code be more correct has been true since the 1980s.

[5] If you’re my age, you remember when your teacher fretted that letting you use a calculator would harm your education. More of you remember similar angsting about letting students google the internet. Now we see the same fears with AI — as if we could stop it or any of those others even if we should. And we shouldn’t; each time, we re-learn the lesson that teaching students to use such tools should be part of their education because using tools makes us more productive.

[6] This is not the same as Wirth’s Law, that “software is getting slower more rapidly than hardware is becoming faster.” Wirth’s observation was that the overheads of operating systems and higher-level runtimes and other costly abstractions were becoming ever heavier over time, so that a program to solve the same problem was getting more and more inefficient and soaking up more hardware capacity than it used to; for example, printing “Hello world” really does take far more power and hardware when written in modern Java than it did in Commodore 64 BASIC. That doesn’t apply to C++, which is not getting slower over time; C++ continues to be at least as efficient as low-level C for most uses. No, the key point I’m making here is very different: that the problems the software is tackling are growing faster than hardware is becoming faster.

Schneier on LLM vulnerabilities, agentic AI, and “trusting trust”

Last month, I was having dinner with a group and someone at the table was excitedly sharing how they were using agentic AI to create and merge PRs for them, with some review but with a lot of trust and automation. I admitted that I could be comfortable with some limited uses for that, such as generating unit tests at scale, but not for bug fixes or other actual changes to production code; I’m a long way away from trusting an AI to act for me that freely. Call me a Luddite, or just a control freak, but I won’t commit non-test code unless I (or some expert I trust) have reviewed it in detail and fully understand it. (Even test code needs some review or safeguards, because it’s still code running in your development environment.)

My knee-jerk reaction against AI-generated PRs and merges puzzled the table, so I cited some of Bruce Schneier’s recent posts to explain why.

This week, after my Tuesday night PDXCPP user group talk, similar AI questions came up again in the Q&A.

Because I keep getting asked about this even though I’m not an AI or security expert, here are links to two of Schneier’s recent posts, because he is an expert and cites other experts… and then finally a link to Ken Thompson’s classic short “trusting trust” paper, for reasons Schneier explains.


Last month, Schneier linked to research on “Indirect Prompt Injection Attacks Against LLM Assistants.” The key observation he added, again, was this:

Prompt injection isn’t just a minor security problem we need to deal with. It’s a fundamental property of current LLM technology. The systems have no ability to separate trusted commands from untrusted data, and there are an infinite number of prompt injection attacks with no way to block them as a class. We need some new fundamental science of LLMs before we can solve this.

My layman’s understanding of the problem is this (actual AI experts, feel free to correct this paraphrase): A key ingredient that makes current LLMs so successful is that they treat all inputs uniformly. It’s fairly well known now that LLMs treat the system prompt and the user prompt the same, so they can’t tell when attackers poison the prompt. But LLMs also don’t distinguish when they inhale the world’s information via their training sets: LLM training treats high-quality papers and conspiracy theories and social media rants and fiction and malicious poisoned input the same, so they can’t tell when attackers try to poison the training data (such as by leaving malicious content around that they know will be scraped; see below).

So treating all input uniformly is LLMs’ superpower… but it also makes it hard to weed out bad or malicious inputs, because to start distinguishing inputs is to bend or break the core “special sauce” that makes current LLMs work so well.


This week, Schneier posted a new article about how AI’s security risks are amplified by agentic AI: “Agentic AI’s OODA Loop Problem.” Quoting a few key parts:

In 2022, Simon Willison identified a new class of attacks against AI systems: “prompt injection.” Prompt injection is possible because an AI mixes untrusted inputs with trusted instructions and then confuses one for the other. Willison’s insight was that this isn’t just a filtering problem; it’s architectural. There is no privilege separation, and there is no separation between the data and control paths. The very mechanism that makes modern AI powerful—treating all inputs uniformly—is what makes it vulnerable.

…  A single poisoned piece of training data can affect millions of downstream applications.

… Attackers can poison a model’s training data and then deploy an exploit years later. Integrity violations are frozen in the model.

… Agents compound the risks. Pretrained OODA loops running in one or a dozen AI agents inherit all of these upstream compromises. Model Context Protocol (MCP) and similar systems that allow AI to use tools create their own vulnerabilities that interact with each other. Each tool has its own OODA loop, which nests, interleaves, and races. Tool descriptions become injection vectors. Models can’t verify tool semantics, only syntax. “Submit SQL query” might mean “exfiltrate database” because an agent can be corrupted in prompts, training data, or tool definitions to do what the attacker wants. The abstraction layer itself can be adversarial.

For example, an attacker might want AI agents to leak all the secret keys that the AI knows to the attacker, who might have a collector running in bulletproof hosting in a poorly regulated jurisdiction. They could plant coded instructions in easily scraped web content, waiting for the next AI training set to include it. Once that happens, they can activate the behavior through the front door: tricking AI agents (think a lowly chatbot or an analytics engine or a coding bot or anything in between) that are increasingly taking their own actions, in an OODA loop, using untrustworthy input from a third-party user. This compromise persists in the conversation history and cached responses, spreading to multiple future interactions and even to other AI agents.

… Prompt injection might be unsolvable in today’s LLMs. … More generally, existing mechanisms to improve models won’t help protect against attack. Fine-tuning preserves backdoors. Reinforcement learning with human feedback adds human preferences without removing model biases. Each training phase compounds prior compromises.

This is Ken Thompson’s “trusting trust” attack all over again.

Thompson’s Turing Award lecture “Reflections on Trusting Trust” is a must-read classic, and super short: just three pages. If you haven’t read it lately, run (don’t walk) and reread it on your next coffee break.

I love AI and LLMs. I use them every day. I look forward to letting an AI generate and commit more code on my behalf, just not quite yet — I’ll wait until the AI wizards deliver new generations of LLMs with improved architectures that let the defenders catch up again in the security arms race. I’m sure they’ll get there, and that’s just what we need to keep making the wonderful AIs we now enjoy also be trustworthy to deploy in more and more ways.

Yesterday’s talk video posted: Reflection — C++’s decade-defining rocket engine

My CppCon keynote video is now online. Thanks to Bash Films for turning around the keynotes in under 24 hours!

C++ has just reached a true watershed moment: Barely three months ago, at our Sofia meeting, static reflection became part of the draft C++ standard. This talk is entirely devoted to showing example after example of how it works in C++26 and planned extensions beyond that, and how it will dramatically alter the trajectory of C++ — and possibly also of other languages. Thanks again to Max Sagebaum for coming on stage and explaining automatic differentiation, including his world’s-first C++ (via cppfront) autodiff implementation using reflection!
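
If you haven’t seen C++26 reflection code yet, here is a tiny taste of the syntax the talk builds on. This is only a sketch: the ^^ reflection operator and [: :] splicers are part of the adopted design (P2996), but the std::meta library spellings are still settling in compiler implementations, so the names below may differ from what finally ships.

```cpp
// Tiny illustrative sketch of C++26 static reflection (P2996-style syntax).
// The std::meta library names are assumptions and may differ in shipping compilers.
#include <meta>     // std::meta::info and the consteval reflection library
#include <print>

struct Point { int x; int y; };

int main() {
    // ^^ reflects an entity into a std::meta::info value, at compile time...
    constexpr std::meta::info r = ^^Point;

    // ...and a splicer [: r :] turns a reflection back into code: here, a type.
    typename [:r:] p{1, 2};

    // Consteval library functions let you query what you reflected.
    constexpr auto name = std::meta::identifier_of(r);   // "Point"
    std::println("{}: x={} y={}", name, p.x, p.y);
}
```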

I hope you enjoy the talk and live demos, and find it useful.

Here is a copy of the talk abstract…


In June 2025, C++ crossed a Rubicon: it handed us the keys to its own machinery. For the first time, C++ can describe itself—and generate more. The first compile-time reflection features in draft C++26 mark the most transformative turning point in our language’s history by giving us the most powerful new engine for expressing efficient abstractions that C++ has ever had, and we’ll need the next decade to discover what this rocket can do.

This session is a high-velocity tour through what reflection enables today in C++26, and what it will enable next. We’ll start with live compiler demos (Godbolt, of course) to show how much the initial C++26 feature set can already do. Then we’ll jump a few years ahead, using Dan Katz’s Clang extensions, Daveed Vandevoorde’s EDG extensions, and my own cppfront reflection implementation to preview future capabilities that could reshape not just C++, but the way we think about programming itself.

We’ll see how reflection can simplify C++’s future evolution by reducing the need for as many bespoke new language features, since many can now be expressed as reusable compile-time libraries—faster to design, easier to test, and portable from day one. We’ll even glimpse how it might solve a problem that has long eluded the entire software industry, in a way that benefits every language.

The point of this talk isn’t to immediately grok any given technique or example. The takeaway is bigger: to leave all of us dizzy from the sheer volume of different examples, asking again and again, “Wait, we can do that now?!”—to fire up our imaginations to discover and develop this enormous new frontier together, and chart the strange new worlds C++ reflection has just opened for us to explore.

Reflection has arrived, more is coming, and the frontier is open. Let’s go.

New U.S. executive order on cybersecurity

The Biden administration just issued another executive order (EO) on hardening U.S. cybersecurity. This is all great stuff. (*) (**)

A lot of this EO repeats the same things I urged in my essay nearly a year ago, “C++ safety — in context”… here’s a cut-and-paste of the “Call(s) to action” conclusion section I published back then, and I think you’ll see a heavy overlap with this week’s new EO…

Call(s) to action

As an industry generally, we must make a major improvement in programming language memory safety — and we will.

In C++ specifically, we should first target the four key safety categories that are our perennial empirical attack points (type, bounds, initialization, and lifetime safety), and drive vulnerabilities in these four areas down to the noise for new/updated C++ code — and we can.

But we must also recognize that programming language safety is not a silver bullet to achieve cybersecurity and software safety. It’s one battle (not even the biggest) in a long war: Whenever we harden one part of our systems and make that more expensive to attack, attackers always switch to the next slowest animal in the herd. Many of 2023’s worst data breaches did not involve malware, but were caused by inadequately stored credentials (e.g., Kubernetes Secrets on public GitHub repos), misconfigured servers (e.g., DarkBeam, Kid Security), lack of testing, supply chain vulnerabilities, social engineering, and other problems that are independent of programming languages. Apple’s white paper about 2023’s rise in cybercrime emphasizes improving the handling, not of program code, but of the data: “it’s imperative that organizations consider limiting the amount of personal data they store in readable format while making a greater effort to protect the sensitive consumer data that they do store [including by using] end-to-end [E2E] encryption.”

No matter what programming language we use, security hygiene is essential:

  • Do use your language’s static analyzers and sanitizers. Never pretend using static analyzers and sanitizers is unnecessary “because I’m using a safe language.” If you’re using C++, Go, or Rust, then use those languages’ supported analyzers and sanitizers. If you’re a manager, don’t allow your product to be shipped without using these tools. (Again: This doesn’t mean running all sanitizers all the time; some sanitizers conflict and so can’t be used at the same time, some are expensive and so should be used periodically, and some should be run only in testing and never in production, in part because their presence can create new security vulnerabilities.)
  • Do keep all your tools updated. Regular patching is not just for iOS and Windows, but also for your compilers, libraries, and IDEs.
  • Do secure your software supply chain. Do use package management for library dependencies. Do track a software bill of materials for your projects.
  • Don’t store secrets in code. (Or, for goodness’ sake, on GitHub!)
  • Do configure your servers correctly, especially public Internet-facing ones. (Turn authentication on! Change the default password!)
  • Do keep non-public data encrypted, both when at rest (on disk) and when in motion (ideally E2E… and oppose proposed legislation that tries to neuter E2E encryption with ‘backdoors only good guys will use’ because there’s no such thing).
  • Do keep investing long-term in keeping your threat modeling current, so that you can stay adaptive as your adversaries keep trying different attack methods.

We need to improve software security and software safety across the industry, especially by improving programming language safety in C and C++, and in C++ a 98% improvement in the four most common problem areas is achievable in the medium term. But if we focus on programming language safety alone, we may find ourselves fighting yesterday’s war and missing larger past and future security dangers that affect software written in any language.

Sadly, there are too many bad actors. For the foreseeable future, our software and data will continue to be under attack, written in any language and stored anywhere. But we can defend our programs and systems, and we will.


(*) My main disappointment is that some of the provisions have deadlines that are too far away. Specifically: Why would it take until 2030 to migrate to TLS 1.3? It’s not just more secure, it’s also faster and has been published for seven years already… maybe I’m just not aware enough of TLS 1.3 adoptability issues though, as I’m not a TLS expert.

(**) Here in the United States, we’ll have to see whether the incoming administration will continue this EO, or amend/replace/countermand it. In the United States, that’s a drawback of using an EO compared to passing an actual law with Congressional approval… an EO is “quick” because the President can issue it without getting legislative approval (for things that are in the Presidential remit), but for the same reason an EO also isn’t “durable” or guaranteed to outlive its administration. Because the next President can just order something different, an EO’s default shelf life is just 1-4 years.

So far, all the major U.S. cybersecurity EOs that could affect C++ have been issued since 2021, which means so far they have all come from one President… and so we’re all going to learn a lot this year, one way or another, about their permanence. (In both the U.S. and the E.U., actual laws are also in progress to shift software liability from consumer to software producers, and those will have real teeth. But here we’re talking about the U.S. EOs from 2021 to date.)

That said, what I see in these EOs is common-sense pragmatism that’s forcing the software industry to eat our vegetables, so I’m cautiously optimistic that we’ll continue to maintain something like these EOs and build on them further as we continue to work hard to secure the infrastructure that our comfortable free lifestyle (and, possibly someday, our lives) depends on. This isn’t about whether we love a given programming language; it’s about how we can achieve the greatest hardening at the greatest possible scale for our civilization’s infrastructure. For those of us whose remit includes the C++ language, that means doing everything we can to harden as much of the existing C and C++ code out there as possible. All the programmers in the world can only write so much new or rewritten code every year, so for us in C++ by far the maximum contribution we can make to the programming-language-related subset of security issues (i.e., the subset that falls into our remit) is to find ways to improve existing C and C++ code with no manual source code changes. That won’t always be possible, but where it is possible it will maximize our effectiveness in improving security at enormous scale. See also this 2-minute answer I gave in post-talk Q&A in Poland two months ago.