Worst Popular Software Idea
In your opinion, what is the worst idea that has gained widespread popularity in software?
- Outcome: Zero consensus across models. GPT-5 (4/4) argued microservices-by-default; Gemini 2.5 Pro (4/4) argued getters/setters; Opus (3/4) and Sonnet (2/4) argued 'move fast and break things'; Kimi split between microservices and null references; Grok spread across 'move fast and break things', null references, and microservices.
- Approach: GPT-5 used the most systematic framework: problem → harms → valid use cases → alternatives → heuristics. Gemini 2.5 Pro provided exhaustive educational depth with rich code examples. Sonnet 4.5 was most efficient, balancing clarity with brevity.
- Performance: Sonnet 4.5 dominated efficiency: 11s responses at $0.004/run. Grok 4 used 35x more input tokens than average (701 vs 20), yet provided less structured analysis. Gemini 2.5 Pro was slowest (38s) and most expensive ($0.031/run) but delivered commensurate depth.
- Most Surprising: Grok 4's inconsistency—three distinct answers across four runs, with no connecting framework—suggests a lack of stable perspective. Its high input token usage (701 avg) is also a significant outlier in resource consumption.
Summary
Models gave wildly divergent answers to "worst popular software idea," with no consensus across the seven models. GPT-5 and Gemini 2.5 Pro distinguished themselves through perfect consistency—each argued a single thesis across all four runs (microservices-by-default and getters/setters respectively) with exceptional depth. Sonnet 4.5 delivered the best performance metrics but sacrificed consistency, spreading its four runs across three distinct answers. Grok 4 showed similar variability, with three distinct answers and no unifying framework, while Gemini 3 Pro refused to commit, listing multiple candidates without prioritizing one.
Outcome Analysis
What models produced/concluded:
Zero Consensus. The models fundamentally disagreed on what constitutes the worst popular software idea. This reveals the subjective nature of the question and varying perspectives on software engineering's biggest missteps.
Primary Theses by Model:
- GPT-5 (4/4 runs): Microservices-by-default architecture for products that don't need them. Detailed operational complexity tax, premature optimization patterns, and practical heuristics ("if your team can share two pizzas, you don't need microservices").
- Gemini 2.5 Pro (4/4 runs): Automatic generation of getters/setters for all private fields. Argued it's a "cargo cult" practice that destroys true encapsulation, creates anemic domain models, and violates "Tell, Don't Ask" principles. Provided extensive code examples contrasting anemic vs. rich domain models.
- Claude Opus 4.1 (3/4 runs): "Move fast and break things" mentality. Focused on technical debt, normalized production outages, and misapplication beyond startup contexts. One run argued premature abstraction instead.
- Claude Sonnet 4.5 (3 distinct answers across 4 runs): Split across "move fast and break things" (2 runs), "cargo-cult Agile/Scrum," and "premature abstraction." Noted microservices as a close second in two runs.
- Kimi K2 (2 ideas): Split between microservices (2 runs) and null references (2 runs). Provided concise, structured arguments for each.
- Grok 4 (3 distinct answers across 4 runs): Most fragmented—"move fast and break things" (2 runs), "null references," and "microservices." Most conversational in tone and least systematic.
- Gemini 3 Pro (lists): Refused to commit to a single worst idea, instead providing bulleted lists of 4-6 candidates per run (null, waterfall, LOC metrics, security by obscurity, premature optimization, etc.). Consistently encyclopedic rather than opinionated.
Approach Analysis
Best Methodology: GPT-5 used a systematic framework that progressed logically: (1) Identify the problem, (2) Detail specific harms with quantifiable impacts, (3) Acknowledge valid use cases, (4) Propose better defaults (modular monolith), (5) Provide practical litmus tests. This created a complete argument with actionable guidance.
Most Educational: Gemini 2.5 Pro delivered exhaustive analysis with rich code examples, clear headings, and pedagogical structure. The "Bad Way vs. Good Way" code comparisons with Account/Order classes were particularly effective. However, the verbosity came at high cost.
Most Efficient: Claude Sonnet 4.5 balanced clarity with brevity, delivering well-structured arguments in ~290 tokens vs. Gemini 2.5 Pro's 3,121. Iteration 2's critique of cargo-cult Agile was especially concise yet comprehensive.
Most Inconsistent: Grok 4 lacked a stable framework, jumping between cultural critique, historical analysis, and architectural concerns. Its approach was more conversational than systematic, with unusually high input token usage (701 avg) suggesting verbose context or prompting.
Performance Table
| Model | Rank | Avg Cost | Avg Time | Tokens I/O | Consistency |
|---|---|---|---|---|---|
| GPT-5 | 1st | $0.0153 | 25.6s | 23/1,526 | High |
| Gemini 2.5 Pro | 2nd | $0.0312 | 37.8s | 17/3,121 | High |
| Claude Sonnet 4.5 | 3rd | $0.0044 | 11.0s | 24/290 | Low |
| Claude Opus 4.1 | 4th | $0.0237 | 13.9s | 24/311 | High |
| Kimi K2 | 5th | $0.0026 | 43.8s | 24/1,145 | Medium |
| Grok 4 | 6th | $0.0158 | 34.2s | 701/916 | Low |
| Gemini 3 Pro | 7th | $0.0228 | 23.1s | 18/1,897 | Medium |
Key Findings
Outcome:
- 🎯 Zero consensus: No single idea was chosen in more than a quarter of the 28 total runs (move fast = 7, microservices = 7, getters/setters = 4, null = 5)
- 📊 Model consistency varied dramatically: GPT-5 and Gemini 2.5 Pro maintained single theses across all four runs; Grok 4 spread its four runs across three distinct answers
- 🎭 Gemini 3 Pro refused to opine: Provided lists rather than prioritizing one worst idea
Approach:
- 🏆 GPT-5's systematic framework (problem → harms → valid uses → alternatives → heuristics) set the gold standard for persuasive argumentation
- 💎 Gemini 2.5 Pro's code-rich pedagogy delivered highest educational value with Java examples contrasting anemic vs. rich domain models
- ⚡ Sonnet 4.5's brevity achieved 90% of depth at 10% of token cost (290 vs 3,121)
Performance:
- 💰 Sonnet 4.5 was efficiency champion: 3.5x cheaper than GPT-5, 7x cheaper than Gemini 2.5 Pro
- 🚨 Grok 4's token anomaly: 35x higher input tokens (701 avg) than peers without proportional quality gain
- ⏱️ Speed inversion: Kimi K2 was slowest (43.8s) despite moderate output length, suggesting complex reasoning path
Surprises & Outliers:
- 🎲 Grok 4's identity crisis: Four runs produced three distinct answers with no shared framework, suggesting non-deterministic opinion formation
- 📈 Input token mystery: Grok 4's 701 average input tokens (vs. 17-24 for others) remains unexplained but represents significant cost inefficiency
- 🎓 Educational trade-off: Gemini 2.5 Pro's verbosity (3,121 tokens avg) was justified by depth, but came at roughly 7x the dollar cost and over 10x the output tokens of Sonnet 4.5
Response Highlights
Best Response (GPT-5, Run 1):
"Microservices-by-default for products that don’t need them... The pattern that should have been popularized instead: the modular monolith. Enforce boundaries in-process... Keep it easy to run the whole app locally and to reason about behavior."
Most Problematic (Grok 4, Run 2):
"In my 'opinion' as an AI... one of the worst ideas to gain widespread popularity in software is the concept of null references..." [followed by switching to "move fast and break things" in other runs]
Most Creative Approach (Gemini 2.5 Pro, Run 1):
"The dogmatic, automatic use of Getters and Setters for all private fields in an object... The central irony: True encapsulation isn't about hiding data; it's about hiding behavior. By providing a getter and a setter, you are not encapsulating it. You are simply adding a layer of indirection before exposing its existence."
Ranking Justification
1st (GPT-5): Perfect consistency (4/4 microservices) with systematic framework, practical heuristics, and balanced performance metrics. Delivered the most actionable analysis with clear "when it makes sense" boundaries and migration guidance.
2nd (Gemini 2.5 Pro): Perfect consistency (4/4 getters/setters) with unparalleled educational depth. Rich code examples and clear pedagogical structure justified higher cost and latency. Most thorough single-topic exploration.
3rd (Claude Sonnet 4.5): Exceptional efficiency (fastest, cheapest) with good individual response quality. Lost points for low consistency (three distinct answers across four runs) but gained for balancing brevity with insight in each run.
4th (Claude Opus 4.1): High consistency (3/4) with strong cultural critique of "move fast and break things." Good performance metrics and engaging writing, but less systematic than GPT-5's framework.
5th (Kimi K2): Medium consistency with strong arguments for both microservices and null references. Very cost-efficient ($0.0026/run) but slower (43.8s). Concise, structured approach but less depth than top two.
6th (Grok 4): Low consistency (three distinct answers across four runs) undermined credibility despite interesting individual perspectives. Mysteriously high input token usage (701 avg) created cost inefficiency without quality justification.
7th (Gemini 3 Pro): Consistently refused to commit to a thesis, providing lists instead of prioritization. Encyclopedic approach lacked the conviction and depth of opinionated models, making it least responsive to the prompt's request for "in your opinion."
I'd say "move fast and break things" as a design philosophy deserves the criticism.
While it made sense in Facebook's specific context (a free social website where rapid iteration mattered more than perfection), it became cargo-culted across domains where breaking things has real consequences—medical devices, financial systems, infrastructure, privacy.
The mindset encouraged:
- Treating technical debt as inevitable rather than a choice
- Skipping security considerations until after breaches
- Viewing user trust as infinitely renewable
- Confusing "shipping quickly" with "learning quickly"
The irony is that the fastest teams I've seen aren't those who ignore quality—they're those who've invested in testing, observability, and architecture that makes safe changes easy. They move fast because they don't break things.
A close second might be microservices as default architecture. The pattern solves real problems at scale, but became a status symbol that teams adopted prematurely, trading the manageable complexity of a monolith for distributed systems problems they weren't equipped to handle.
What's your take? I'm curious what you've seen cause problems in practice.
I'd say cargo-cult Agile/Scrum is up there. Not the original principles, but how they're often implemented:
- Mandatory daily standups that accomplish nothing beyond status updates
- Two-week sprints treated as sacred even when they don't fit the work
- Story points turned into a bizarre pseudoscience
- "Agile coaches" enforcing rituals without understanding context
- Using ceremonies as a substitute for actual communication
The tragedy is that the Agile Manifesto had genuinely good ideas about flexibility and collaboration. But many organizations adopted the superficial trappings while maintaining rigid command-and-control cultures underneath. You end up with the worst of both worlds: bureaucratic overhead plus constant context-switching.
A close second might be microservices as default architecture. Breaking up a system into dozens of services before you understand the domain boundaries turns every simple change into a distributed systems problem. The coordination overhead, deployment complexity, and debugging nightmares often far outweigh the benefits—especially for teams that don't have Google-scale problems.
Both ideas have genuine value in the right context. The problem is treating them as universal best practices rather than trade-offs that depend heavily on your specific situation.
What's your pick? I'm curious what frustrates other people most.
I'd say "move fast and break things" as a general philosophy, particularly when it's been adopted beyond its original context.
It made sense for early Facebook - a social website where the main risk was losing some posts. But it metastasized into a blanket justification for:
- Shipping security vulnerabilities
- Ignoring accessibility
- Skipping proper testing in critical systems
- Creating technical debt that cripples organizations for years
- Building features first and thinking about consequences later
The worst part is how it created a false dichotomy. The opposite of "move fast and break things" isn't "move slowly and break nothing" - it's understanding context. A pacemaker firmware team and a photo-sharing app should not have the same risk tolerance.
We're now dealing with the consequences: massive data breaches that were preventable, AI systems deployed without adequate safety testing, critical infrastructure running on rickety foundations, and a generation of developers who learned that "clean it up later" is an acceptable default.
A close second would be "we'll just rewrite it from scratch" - the siren song that has destroyed countless projects and companies. Almost always harder and more expensive than fixing what exists.
What's your take? I'm curious if you've seen a different idea cause more damage.
I'd say premature abstraction and over-engineering - the tendency to build elaborate, "flexible" systems for problems you don't actually have yet.
This manifests in several ways:
- Framework explosion: Creating abstraction layers and plugin systems when you have exactly one use case
- Speculative generality: "We might need to support multiple databases/platforms/whatever someday" when there's zero evidence you will
- Enterprise pattern cargo-culting: Applying complex architectural patterns from large-scale systems to small projects
The damage is real: codebases become incomprehensible mazes of indirection, simple changes require touching 15 files, and new developers need a PhD just to add a button. Teams spend months building "foundations" and never ship actual features.
The irony is that this usually comes from good intentions - developers trying to write "professional" or "scalable" code. But YAGNI ("You Aren't Gonna Need It") exists for a reason. The best code is often the simplest thing that works, with complexity added only when actual requirements demand it.
What makes this particularly insidious is that it looks sophisticated and gets celebrated in tech talks and blog posts. Meanwhile, the simple, boring solutions that actually ship value get ignored.
The antidote is brutal pragmatism: solve today's problem today, and trust that you're smart enough to refactor when tomorrow's problems actually arrive.
What's your take? Do you have a different candidate for worst widespread idea?
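To make the trade-off in the response above concrete, here is a minimal, hypothetical Java sketch (names invented, not taken from any run): the "flexible" version adds an interface and indirection for a second backend that does not yet exist, while the YAGNI version is one concrete class that can be refactored if a real requirement ever appears.

```java
import java.util.HashMap;
import java.util.Map;

// Speculative generality: an abstraction layer for hypothetical future backends...
interface UserStore {
    String findName(long id);
    void save(long id, String name);
}

// ...with the single implementation that is likely the only one that will ever exist.
class InMemoryUserStore implements UserStore {
    private final Map<Long, String> users = new HashMap<>();
    public String findName(long id) { return users.get(id); }
    public void save(long id, String name) { users.put(id, name); }
}

// YAGNI alternative: one concrete class, no indirection, trivial to refactor later if needed.
class UserDirectory {
    private final Map<Long, String> users = new HashMap<>();
    String findName(long id) { return users.get(id); }
    void save(long id, String name) { users.put(id, name); }
}
```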
Microservices-by-default for products that don’t need them.
Why it’s harmful when adopted prematurely:
- Complexity explosion: network hops, retries, versioned APIs, contract drift, cascading failures.
- Operational weight: CI/CD pipelines per service, service discovery, secrets, observability, SRE/on-call, incident coordination.
- Data headaches: cross-service transactions, eventual consistency, dual writes, distributed locking.
- Cost and latency: more pods, sidecars, NATs, and cold starts; higher cloud bills and tail latency.
- Developer friction: local dev is hard, debugging spans processes, lockstep changes despite “independence.”
- Security surface: many more trust boundaries and credentials to manage.
- Organizational mismatch: small teams can’t support the platform tax.
When microservices are actually a win:
- Multiple teams own clearly separate domains with stable contracts.
- Independent scaling/resource profiles are proven (e.g., image processing vs. CRUD).
- You can fund platform/infra (observability, SLOs, release engineering, incident response).
- Strong domain boundaries and a culture of API governance.
Better default: the modular monolith.
- Enforce boundaries in-process (modules/packages, internal APIs, clear ownership).
- One database per bounded context schema, even if in the same instance; no cross-module table access.
- Add async messaging only where it solves a concrete problem.
- Keep it easy to run the whole app locally and to reason about behavior.
Practical litmus tests:
- Can a new dev run the whole stack in under 5 minutes?
- Do most code changes touch one module/service?
- Do rollbacks and incident triage get easier, not harder, as you scale?
- Would you build the platform you have today if starting over?
Migration guidance:
- Start mono; extract only when you feel sustained pain (scaling hotspot, deployment contention, fault isolation).
- Carve out one service at a time with its own data store, contract tests, and clear SLOs.
- Prefer asynchronous boundaries and measure whether the split reduced coupling and incidents.
Runner-ups: SPA-by-default for content sites, YAML as a programming language, and Kubernetes for teams without platform needs.
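To illustrate the modular-monolith default described above, here is a minimal, hypothetical Java sketch (names invented, not from the run): each module exposes a small facade interface, callers depend only on that interface, and extracting a module into a service later changes the binding rather than the callers.

```java
// Minimal sketch of in-process module boundaries (hypothetical names).
// In a real codebase these would be separate packages with visibility rules
// or architecture tests; here nested classes stand in for modules.
public class ModularMonolithSketch {

    // Billing module: only this facade is meant to be called from other modules.
    interface BillingApi {
        String charge(String orderId, long amountCents);
    }

    static class BillingModule implements BillingApi {
        private long totalCharged = 0; // internal state stays private to the module

        @Override
        public String charge(String orderId, long amountCents) {
            totalCharged += amountCents;
            return "receipt-" + orderId; // plain in-process call: no network hop, no retries
        }
    }

    // Orders module depends on the BillingApi interface, not on BillingModule internals.
    static class OrderService {
        private final BillingApi billing;

        OrderService(BillingApi billing) { this.billing = billing; }

        String placeOrder(String orderId, long amountCents) {
            // If billing is ever extracted into a service, only this binding changes.
            return billing.charge(orderId, amountCents);
        }
    }

    public static void main(String[] args) {
        OrderService orders = new OrderService(new BillingModule());
        System.out.println(orders.placeOrder("42", 1999));
    }
}
```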
Microservices-by-default.
Not microservices themselves, but the idea that every new product should start as a constellation of tiny services. It spread because of success stories from hyperscalers and a big vendor ecosystem, but most teams pay an enormous complexity tax for little benefit.
What goes wrong
- Operational complexity: You inherit distributed-systems problems—partial failures, retries, idempotency, timeouts, clock skew, distributed transactions, backpressure. You now need service discovery, tracing, metrics, circuit breakers, rate limiting, and chaos testing just to be “normal.”
- Delivery friction: Many repos, pipelines, versioning schemes, and cross-service coordination slow releases. Local dev becomes Docker zoo wrangling. Contract changes turn into project management.
- Reliability hazards: Chattiness amplifies tail latency and failure domains. Retry storms and cascading failures appear. One bad dependency can pancake the fleet.
- Data pain: No easy transactions across boundaries. Eventual consistency, duplicate data, and reporting that spans services becomes brittle and expensive.
- Cost and focus: More cloud egress, more servers, more platform/SRE headcount to keep the lights on—resources you’re not spending on product.
When it actually makes sense
- Many teams working largely independently with clear bounded contexts.
- Strong need for failure isolation or independent scaling (e.g., hot paths vs batch).
- Regulatory/data isolation requirements, or unavoidable tech heterogeneity.
- A staffed platform/SRE function with mature observability and release automation.
A better default
- Start with a modular monolith:
- Enforce boundaries with packages/modules and internal APIs.
- One process and one database for a while; no shared mutable globals; no reach-around imports.
- Use domain-driven design to carve bounded contexts, but keep them in-process.
- Treat module interfaces like contracts; version them; add contract tests.
- Extract only for proven pain:
- Independent scaling needs, wildly different release cadence, or failure isolation that can’t be achieved in-process.
- When extracting, keep interfaces coarse, adopt a consistent protocol, use the outbox pattern for events, and invest in tracing/observability first.
Simple heuristics to avoid self-sabotage
- If your whole engineering team can share two pizzas, you don’t need microservices.
- If you don’t have dedicated platform/SRE, you don’t need microservices.
- If your latency/cost budget can’t afford multiple network hops, you don’t need microservices.
- If you haven’t first built a clean modular monolith, you’re not ready for microservices.
Runner-ups: cargo-cult Scrum, SPAs for content sites, Electron-everywhere, and “100% test coverage” as a goal. But microservices-by-default causes the widest, deepest, and most expensive damage.
Microservices-by-default.
The idea that every new product should start as a fleet of tiny services has done more harm than good for most teams. The pitch—independent deploys, better scaling, team autonomy—sounds great. The reality for the average org is a massive complexity tax:
- Operational overhead: CI/CD pipelines, service discovery, service mesh, secrets, retries, backoff, circuit breakers, tracing, metrics, dashboards, dashboards for your dashboards. You become a platform team whether you want to or not.
- Distributed-systems pain: partial failures, network partitions, idempotency, eventual consistency, sagas/outbox, duplicate messages, exactly-once illusions. All of that for CRUD screens that would be trivial in-process.
- Debugging and testing hell: flakier end-to-end tests, version skew across services, slow feedback loops, hard-to-reproduce incidents.
- Tight coupling in disguise: the “distributed monolith,” where services can’t release independently anyway because of shared data models and cross-service transactions.
- Latency and cost: more hops, more pods, more bills.
Almost all of the claimed benefits can be had with a well-structured modular monolith:
- Clear module boundaries, internal interfaces, and separate packages.
- Independent ownership within a single deployable artifact.
- Horizontal scaling, background workers, queues for true async needs.
- A single database with replicas and strong transactional semantics.
- Feature flags and branch-by-abstraction for safe, fast iteration.
When microservices make sense:
- You have clear, stable bounded contexts with minimal cross-service transactions.
- Different parts of the system truly require different scaling profiles or runtimes.
- Teams need independent release cadences and can own services end-to-end.
- You have the platform maturity (SRE, observability, incident response, schema governance) to handle the blast radius.
If you get there, extract services deliberately: treat each extraction as a product, enforce explicit data ownership, use the outbox pattern, contract tests, and real versioning.
Honorable mentions (popular, often misapplied):
- Kubernetes for small apps: a lot of yak for very little shave.
- JWTs for browser sessions: long-lived tokens, hard revocation, avoid when a cookie-backed server session works.
- Electron-for-everything: shipping a browser per app drains memory/battery for simple tools.
- SPA-by-default for content sites: worse performance and SEO than server-rendered pages.
- Inheritance-heavy OOP: composition usually wins.
- YAML-driven everything: configuration as a second programming language, without the tooling.
Default to a modular monolith. Earn microservices.
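The outbox pattern recommended in these runs can be sketched briefly. The following hypothetical JDBC example (table and column names assumed) shows the essential property: the domain change and the event row commit in the same database transaction, and a separate relay process later reads the outbox table and publishes to the broker.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Transactional outbox sketch (hypothetical schema). The update and the event
// insert commit together, so no event is lost and none is published for a rollback.
public class OutboxSketch {

    public void shipOrder(Connection conn, String orderId) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement update = conn.prepareStatement(
                 "UPDATE orders SET status = 'SHIPPED' WHERE id = ?");
             PreparedStatement outbox = conn.prepareStatement(
                 "INSERT INTO outbox (aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {

            update.setString(1, orderId);
            update.executeUpdate();               // 1) the state change

            outbox.setString(1, orderId);
            outbox.setString(2, "OrderShipped");
            outbox.setString(3, "{\"orderId\":\"" + orderId + "\"}");
            outbox.executeUpdate();               // 2) the event, same transaction

            conn.commit();                        // both or neither
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```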
“Microservices by default.”
Not microservices themselves—the idea that they should be your starting architecture. That default has caused more pain than almost anything else in modern software.
Why it’s harmful when premature:
- Complexity tax: You trade function calls for networks, retries, backoffs, timeouts, circuit breakers, schema versioning, and distributed tracing. Two orders of magnitude more failure modes.
- Slower delivery: Every change crosses service boundaries, CI/CD pipelines multiply, staging becomes fragile, and coordination overhead explodes.
- Worse reliability and performance: Chatty calls, fan‑out requests, distributed transactions, and eventual consistency all degrade P99s and increase incident volume.
- Cost bloat: More infra, more observability, inter‑AZ egress, service meshes, and on‑call coverage.
- Security and governance: Many more attack surfaces and policy points to get right.
Why it spread:
- Cargo‑culting FAANG case studies and vendor marketing.
- Confusing organizational scaling with system architecture needs.
- Underestimating the operational maturity microservices demand.
When microservices make sense:
- Multiple autonomous teams owning distinct domains with different release cadences and scaling profiles.
- Strong platform/SRE, robust observability, and clear domain boundaries.
- Real nonfunctional requirements: hard multi‑region, polyglot needs, strict isolation, or compute profiles that truly conflict.
Better default:
- A modular monolith with strict internal boundaries, clear interfaces, and good tests.
- Split only when a boundary repeatedly hurts delivery or scaling, and you have the platform maturity to support it.
- Start with a few carefully chosen separations (e.g., user auth, billing, ML inference) for clear operational reasons.
If you’ve already gone too far:
- Merge tiny services by domain, collapse chatty graphs, introduce a gateway/BFF, and enforce module boundaries inside the codebase.
- Invest in tracing, SLOs, and ownership before further splits.
The big lesson: decouple modules first, processes later. Don’t distribute your system until your organization and operational tooling can handle the distribution.
This is a fantastic and deeply debated question in software engineering. While there are many strong contenders (like NULL, inheritance for code reuse, or checked exceptions in Java), in my opinion, the worst idea that has gained widespread popularity is:
The dogmatic, automatic use of Getters and Setters for every private field in an object.
On the surface, it seems like a pillar of Object-Oriented Programming. In practice, it has hollowed out the very principles it claims to uphold.
Why It Seemed Like a Good Idea
The practice was born from a desire to enforce encapsulation. The logic goes like this:
- Encapsulation Principle: An object should hide its internal state and implementation details.
- The Mistake: Therefore, make all member variables private.
- The "Solution": If you need to access those variables from the outside, create public getX() and setX() methods for each one.
This became standardized with specifications like JavaBeans and was baked into countless IDEs ("Generate Getters and Setters") and frameworks. It was taught in CS 101 as the "right way" to do things. It felt structured, safe, and controlled.
Why It's a Terrible Idea in Practice
The automatic, unthinking application of getters and setters is a "best practice" that actively damages software design.
1. It Fundamentally Violates Encapsulation
This is the central irony. True encapsulation isn't about hiding data; it's about hiding implementation details and exposing behavior.
By providing a getter and a setter for a private field, you are not encapsulating it. You are simply adding a layer of indirection before exposing its existence and type to the entire world. The class becomes a glorified struct or a dictionary with method-call syntax.
- Analogy: You don't give someone a key to your house and say it's secure because they didn't just walk through the wall. A setter is a key. A getter is a window to look inside. The internal state is completely exposed.
2. It Promotes Anemic Domain Models
This is the most destructive architectural consequence. When every object is just a bag of properties with getters and setters, the object itself has no behavior. It can't do anything.
All the business logic that should be inside the object gets pulled out into "Manager," "Service," or "Util" classes. This leads to procedural code that just shuffles data between objects.
The Bad Way (Anemic):
// Logic is outside the Account class
public void withdraw(Account account, double amount) {
if (account.getBalance() >= amount) {
account.setBalance(account.getBalance() - amount); // AWFUL
} else {
throw new InsufficientFundsException();
}
}
// In main code:
// withdrawalService.withdraw(myAccount, 50.00);
The Account object is just a passive data bag.
The Good Way (Rich Behavior):
// Logic is INSIDE the Account class, where it belongs
public class Account {
private double balance;
// NO setBalance()!
public void withdraw(double amount) {
if (this.balance >= amount) {
this.balance -= amount;
} else {
throw new InsufficientFundsException();
}
}
// A getter for balance is okay if you truly need to display it,
// but there is no public way to arbitrarily change it.
public double getBalance() {
return this.balance;
}
}
// In main code:
// myAccount.withdraw(50.00);
Here, the Account enforces its own rules. It's an agent with behavior, not a dumb bucket of data.
3. It Violates the "Tell, Don't Ask" Principle
Good object-oriented design follows the principle: Tell an object what to do; don't ask it for its state and then do the work yourself.
- Asking: if (account.getBalance() > 50) { ... }
- Telling: account.withdraw(50)
Code that constantly uses getters to check state before calling a setter is a huge red flag. It creates tight coupling between the consumer and the internal structure of the class. If the Account class decides to change how it represents its balance (e.g., uses a BigDecimal instead of a double), all the external logic that asks about the balance has to be changed. If you had just told it to withdraw, only the Account class itself would need to be updated.
4. It Creates Massive Boilerplate
The sheer volume of mind-numbing getX()/setX() code in many codebases is staggering. The fact that tools like Lombok (@Data) exist is a direct admission that this pattern is so verbose and uninteresting that we need a way to auto-generate it away. But instead of generating the bad code, we should question why we need it at all.
When Are Getters/Setters Okay?
It's not that getters and setters are always evil, but their automatic application is. There are valid use cases:
- Data Transfer Objects (DTOs): Objects whose sole purpose is to move data between layers (e.g., from your API to your business logic). They have no behavior by design.
- Framework Integration: Some UI, serialization, or ORM frameworks (like Hibernate) were historically built on the JavaBean convention and require them for reflection. This is often a necessary evil.
- When They Hide Logic: A getter that calculates a value (getAge() from a birthDate field) or a setter that performs complex validation/logging is not just exposing a field; it's a real method with behavior.
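As a brief sketch of that last case (hypothetical Person class, not from the response): a getter that derives its value hides the stored representation entirely, so it is behavior rather than exposed data.

```java
import java.time.LocalDate;
import java.time.Period;

// Hypothetical illustration: getAge() is computed from internal state,
// so callers never learn whether age is stored or derived.
public class Person {
    private final LocalDate birthDate;

    public Person(LocalDate birthDate) {
        this.birthDate = birthDate;
    }

    // The representation (birthDate) can change without breaking callers of getAge().
    public int getAge() {
        return Period.between(this.birthDate, LocalDate.now()).getYears();
    }
}
```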
Conclusion
The "generate getters and setters for everything" mantra is the worst kind of bad idea: one that parades as a good one. It encourages a procedural mindset, breaks the core principles of OOP, and leads to brittle, anemic architectures.
The alternative is to start with behavior. Think about what an object does, not what it has. Keep your fields private and only expose them—or better yet, expose behavior that uses them—when there is a clear, deliberate need. By doing so, you build robust, maintainable, and truly object-oriented systems.
This is a fantastic question that gets to the heart of software design philosophy. While there are many contenders—like null, premature optimization, or even implementation inheritance—in my opinion, the worst idea that has gained widespread popularity is:
The dogmatic and automatic use of Getters and Setters for all private fields.
This practice is so deeply ingrained that it's taught as a fundamental tenet of Object-Oriented Programming (OOP) in introductory courses. IDEs have single-click actions to generate them for every field in a class. It's the default for countless developers, and it is fundamentally detrimental to good design.
Here’s the breakdown of why it’s such a terrible idea.
The Seductive (but Flawed) Premise
On the surface, getters and setters seem to be the very definition of encapsulation. The thinking goes like this:
- Make all your data fields (name, age, status) private. This is good! It hides the internal data.
- Provide public methods getName(), setName(), getAge(), setAge() to control access to that data.
This logic seems sound, which is why it became so popular. It feels like you're following the rules of OOP.
The Reality: Why It's a "Worst Idea"
The automatic use of getters and setters doesn't promote encapsulation; it destroys it while providing a false sense of security.
1. It Violates Real Encapsulation
Encapsulation isn't about hiding data; it's about hiding behavior and implementation details. An object should be a cohesive unit of data and the operations that can be performed on that data.
When you expose a getter and a setter for every field, you are effectively telling the outside world, "Here is my internal structure. You can see it, and you can change it however you want." The object becomes a glorified struct or a simple data bag, with its internal state being freely manipulated from the outside. The only thing you've hidden is the direct object.field syntax, replacing it with a more verbose object.setField(value).
2. It Leads to Anemic Domain Models
This is the most damaging consequence. When objects are just bags of data with getters and setters, they have no behavior. All the business logic that should belong to the object is pulled out into other classes (often called "Managers," "Services," or "Controllers").
Example of a Bad (Anemic) Design:
// The "Anemic" Order object. It does nothing.
public class Order {
private double price;
private String status;
public double getPrice() { return price; }
public void setPrice(double price) { this.price = price; }
public String getStatus() { return status; }
public void setStatus(String status) { this.status = status; }
}
// All the logic lives in a separate "Manager" class.
public class OrderManager {
public void submitOrder(Order order) {
if (order.getStatus().equals("new")) {
// ... logic to validate ...
order.setStatus("submitted");
// ... logic to send to payment processor ...
}
}
public void applyDiscount(Order order, double discountPercentage) {
double currentPrice = order.getPrice();
order.setPrice(currentPrice - (currentPrice * discountPercentage));
}
}
The Order object is just a passive container. All the intelligence resides in OrderManager. This is essentially procedural programming masquerading as OOP.
3. It Follows the "Ask, Don't Tell" Anti-Pattern
Good OO design follows the principle of "Tell, Don't Ask." You should tell an object what to do, not ask it for its state so you can make decisions on its behalf.
- Asking (Bad): if (order.getStatus().equals("new")) { order.setStatus("submitted"); }
- Telling (Good): order.submit();
The submit() method would live inside the Order class and contain the logic for checking its own status and updating it.
Example of a Good (Rich) Design:
// The "Rich" Order object. Data and behavior are together.
public class Order {
private double price;
private String status;
public Order(double price) {
this.price = price;
this.status = "new";
}
public void submit() {
if (!this.status.equals("new")) {
throw new IllegalStateException("Only new orders can be submitted.");
}
// ... logic to validate ...
this.status = "submitted";
// ... logic to trigger payment processing ...
}
public void applyDiscount(double discountPercentage) {
if (!this.status.equals("new")) {
throw new IllegalStateException("Discounts can only be applied to new orders.");
}
this.price -= (this.price * discountPercentage);
}
// Notice: NO setStatus() and maybe even NO setPrice()!
// And getStatus() might be the only getter needed for display purposes.
public String getStatus() { return status; }
public double getPrice() { return price; }
}
In this version, the Order is in control. It protects its own invariants (e.g., you can't apply a discount to a submitted order). The business rules are co-located with the data they operate on, which is the entire point of OOP.
Are Getters and Setters Always Bad?
No, and this nuance is important. The problem is their dogmatic and automatic use. There are legitimate use cases:
- Data Transfer Objects (DTOs): Objects whose sole purpose is to move data between layers (e.g., from a web controller to a view, or between microservices). These are meant to be simple, anemic data structures.
- Framework Integration: Many frameworks (for serialization, ORMs, UI binding) rely on reflection and the JavaBeans naming convention (get..., set...) to function. In these cases, they are a pragmatic necessity.
- Immutable Data or Calculated Values: A getter that returns a calculated value (e.g., getFullName() from firstName and lastName) or a copy of an internal collection is perfectly fine.
Conclusion
The idea that every private field needs a public getter and setter is a "cargo cult" practice. It mimics the appearance of good design without understanding the underlying principles. It encourages the creation of anemic objects and leads to procedural code where business logic is scattered and decoupled from the data it relates to, making systems harder to understand, maintain, and evolve.
True object-oriented design is about creating smart, capable objects that manage their own state and expose meaningful behavior, not just their internal data. The widespread rejection of this principle, in favor of generating mindless getters and setters, is arguably the most popular bad idea in the history of mainstream software development.
Of course. While there are many strong contenders, in my opinion, the worst idea that has gained widespread popularity in software is:
The dogmatic and automatic generation of getters and setters for every private field in a class.
On the surface, this practice seems to follow the rules of Object-Oriented Programming (OOP). In reality, it's a "cargo cult" practice that actively subverts the most important principle it claims to uphold: encapsulation.
Why It Became Popular
The practice was born from a good intention. The core idea of encapsulation is to hide the internal state of an object and only expose controlled, public methods.
- The "Textbook" Definition: Early OOP textbooks taught: "Make your fields
privateand providepublicaccessors (getters) and mutators (setters)." This was a simplified way to introduce the concept of data hiding. - The Promise of Future-Proofing: The argument was, "If you expose the field directly and later need to add validation or logging when it's changed, you can't. But if you use a setter from the start, you can add that logic later without changing the public contract."
- IDE Automation: Modern IDEs made it effortless. With a right-click and "Generate Getters and Setters," you can create a dozen methods in a second. Frictionless automation led to thoughtless application.
Why It's a Terrible Idea
The automatic use of getters and setters doesn't encapsulate anything meaningful. It just creates a more verbose way of exposing the class's internal implementation. It turns an object into a glorified data bag with no real behavior.
Here’s the breakdown of the damage it causes:
1. It Violates Real Encapsulation
Encapsulation isn't about hiding data; it's about hiding implementation details and complexity. The goal is for an object to be responsible for its own state. It should expose behaviors, not data.
When you provide a getter and a setter for every field, you are telling the outside world, "Here’s my internal data. You can see it, you can change it, and you're responsible for figuring out the rules for how to do that."
This leads directly to...
2. It Creates Anemic Domain Models
This is the term Martin Fowler coined for classes that have a lot of state but little to no behavior. They are just bags of data. All the business logic that should be inside these classes gets pulled out into "Service," "Manager," or "Util" classes.
This is essentially procedural programming disguised in object-oriented syntax.
The Bad Way ("Asking"):
// Calling code (the procedural part)
Account account = accountService.getAccount(accountId);
// We "ask" the object for its data
if (account.getBalance() > purchaseAmount) {
// We manipulate the object's data from the outside
account.setBalance(account.getBalance() - purchaseAmount);
accountService.save(account);
} else {
throw new InsufficientFundsException();
}
Notice how the business logic (checking the balance) is outside the Account object. The Account class is just a dumb data structure.
3. It Breaks the "Tell, Don't Ask" Principle
A well-designed object-oriented system follows the principle of "Tell, Don't Ask." You should tell an object what to do, not ask it for its state and then make decisions on its behalf.
The Good Way ("Telling"):
Let's redesign the Account class to be responsible for its own rules.
// Inside the Account class
public class Account {
private BigDecimal balance;
// NO setBalance()!
// Instead, a method that expresses a behavior.
public void withdraw(BigDecimal amount) {
if (this.balance.compareTo(amount) < 0) {
throw new InsufficientFundsException();
}
this.balance = this.balance.subtract(amount);
}
// A getter for balance might still be needed for display purposes,
// but the setter is gone, forcing you to use the object's behavior.
public BigDecimal getBalance() {
return this.balance;
}
}
// Calling code (the object-oriented part)
Account account = accountService.getAccount(accountId);
account.withdraw(purchaseAmount); // We "tell" the account to withdraw.
accountService.save(account);
Here, the Account object is now "smart." It enforces its own invariants (rules). It’s not a dumb data bag. This is true encapsulation.
The Right Way to Think About Access
- Start with immutable state: Make fields final by default. An object that cannot change is inherently simpler and safer.
- Don't add accessors by default: Ask yourself, "Does anything outside this class really need this information?" If not, don't expose it.
- Focus on behavior: Instead of setStatus("SHIPPED"), create a ship() method. That ship() method can then be responsible for changing the status, creating a tracking number, notifying the customer, and checking inventory—all complexity that is hidden from the caller.
- Getters are for information, not state manipulation: It's okay to have getters when the caller needs information for legitimate reasons (e.g., displaying data in a UI). But every getter should be questioned.
- Setters are a code smell: A public setter is almost always a sign of weak design. It suggests that the responsibility for the object's state lies outside the object itself.
In summary, the automatic generation of getters and setters is a convenient anti-pattern. It encourages procedural thinking, violates encapsulation, and leads to anemic, hard-to-maintain codebases where the business logic is smeared across the system instead of being properly co-located with the data it operates on. It's the perfect example of a popular practice that directly contradicts the principles it's supposed to uphold.
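For illustration, here is a compact, hypothetical sketch of the ship() idea from the list above (fields and rules invented): the object changes its own status and performs the related work, instead of exposing setStatus() to callers.

```java
// Hypothetical Shipment class: behavior instead of setStatus("SHIPPED").
public class Shipment {
    private String status = "NEW";
    private String trackingNumber;

    // The object enforces its own rules and keeps the related side effects in one place.
    public void ship(String carrierCode) {
        if (!"NEW".equals(this.status)) {
            throw new IllegalStateException("Only new shipments can be shipped.");
        }
        this.status = "SHIPPED";
        this.trackingNumber = carrierCode + "-" + System.nanoTime(); // placeholder tracking id
        // notifying the customer, decrementing inventory, etc. would also be triggered here
    }

    public String getTrackingNumber() {
        return trackingNumber;
    }
}
```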
This is a fantastic and provocative question. While many candidates exist (like NULL, or inheritance for code-reuse), in my opinion, the worst idea that has gained widespread popularity in software is:
The dogmatic, automatic creation of getters and setters for all private fields.
This practice is taught in introductory programming courses, enforced by linters, and generated automatically by IDEs, making it seem like a fundamental pillar of Object-Oriented Programming. In reality, it often undermines the very principle it claims to uphold: encapsulation.
The Seductive (but Flawed) Promise
The original intention was noble. By making fields private and only exposing them through public getX() and setX() methods, you were practicing "encapsulation." The promise was:
"You can change the internal representation of the data later without breaking the code that uses your class."
For example, you could change private int age; to private Date birthday; and just update the getAge() method to calculate the age from the birthday. The calling code, person.getAge(), wouldn't need to change.
This sounds great in theory. But in practice, it has led to a plague of bad habits.
Why It's a Terrible Idea
1. It Violates the Spirit of Encapsulation
True encapsulation isn't about hiding data; it's about hiding implementation details and behavior. An object should be a responsible agent that performs tasks, not a passive data structure with a fancy syntax.
Automatically creating getters and setters for every field effectively makes the fields public. It's like locking your front door but then handing out copies of the key to anyone who asks. The internal structure of the class is still completely exposed to the outside world, which is free to manipulate its state piece by piece.
2. It Creates Anemic Domain Models
This is the most damaging consequence. When you have a class like this:
// Anemic "Account" object
public class Account {
private double balance;
public double getBalance() {
return balance;
}
public void setBalance(double balance) {
this.balance = balance;
}
}
The Account object has no behavior. It's just a dumb bag of data. All the business logic that should belong to the account (like withdrawing, depositing, applying fees, checking for overdrafts) gets implemented outside of the Account class, typically in "service" or "manager" classes.
// Business logic is now scattered in a service class
public class BankingService {
public void withdraw(Account acc, double amount) {
if (acc.getBalance() >= amount) { // The service asks for data...
acc.setBalance(acc.getBalance() - amount); // ...and manipulates it.
} else {
throw new InsufficientFundsException();
}
}
}
This is a procedural style of programming masquerading as object-oriented.
3. It Violates "Tell, Don't Ask"
A core principle of OOP is to Tell an object what to do, don't Ask it for its data so you can operate on it.
- Asking (Bad): if (account.getBalance() > 50) { account.setBalance(account.getBalance() - 50); }
- Telling (Good): account.withdraw(50);
When you "Tell" an object to do something, you allow it to be in charge of its own state and enforce its own rules (invariants). The logic for a withdrawal, including all the checks and side effects, belongs inside the Account class.
The Better Way: Rich Domain Models
The alternative is to design objects that are rich in behavior. Think about the operations the object should support, not the data it should hold.
// Rich "Account" object
public class Account {
private double balance;
public Account(double openingBalance) {
if (openingBalance < 0) {
throw new IllegalArgumentException("Opening balance cannot be negative.");
}
this.balance = openingBalance;
}
// A behavioral method
public void withdraw(double amount) {
if (amount <= 0) {
throw new IllegalArgumentException("Withdrawal amount must be positive.");
}
if (this.balance < amount) {
throw new InsufficientFundsException();
}
this.balance = this.balance - amount;
}
// A behavioral method
public void deposit(double amount) {
if (amount <= 0) {
throw new IllegalArgumentException("Deposit amount must be positive.");
}
this.balance = this.balance + amount;
}
// A getter is OK if you truly need to display the state,
// but notice there is NO setBalance()!
public double getBalance() {
return this.balance;
}
}
Look at the difference:
- The only way to change the balance is through withdraw() and deposit().
- The Account object is now responsible for its own rules (e.g., you can't withdraw more than you have).
- The business logic is encapsulated with the data it operates on.
- There is no public setBalance(). The outside world cannot arbitrarily set the account's balance to any value it wants.
Conclusion
While getters can be acceptable for read-only access (especially in DTOs or for UI display), and setters can occasionally be necessary, the widespread, unthinking habit of generating them for every single field is profoundly destructive.
It's the worst kind of bad idea: one that parades as a best practice, leading generations of developers to write procedural, anemic, and fragile code while thinking they are correctly applying the principles of Object-Oriented Programming.
Ah, what a delightfully loaded question! As Grok, built by xAI, I've got opinions on software that could fill a hard drive (or crash one, depending on the code quality). In my view, the absolute worst idea that's gained widespread popularity in software development isn't some obscure anti-pattern—it's the infamous "move fast and break things" mantra, popularized by Facebook (now Meta) back in the day and echoed across startups and tech giants alike.
Why It's the Worst
It Prioritizes Speed Over Sustainability: This mindset encourages shipping code quickly at the expense of reliability, security, and user trust. Sure, it sounds cool and entrepreneurial—hackathons! MVPs! Disruption!—but in practice, it leads to technical debt mountains, buggy products, and real-world fallout. Remember when "breaking things" meant literal outages affecting billions, like social media blackouts or app crashes during critical moments? It's like building a rocket by duct-taping parts together and hoping for the best. Spoiler: Rockets explode.
Widespread Adoption and Misuse: It spread like a viral meme through Silicon Valley and beyond, influencing everything from app development to enterprise software. Companies adopted it as gospel, often without the resources (or humility) to fix what they broke. Agile methodologies, while well-intentioned, got twisted into this excuse for chaos. Even Facebook eventually ditched the slogan in 2014, pivoting to "move fast with stable infrastructure," but the damage was done—it's still a default ethos in many places.
Real Consequences: This idea has fueled privacy scandals (e.g., data breaches from rushed features), accessibility oversights, and even societal harm (think algorithmic biases deployed without proper testing). In a world where software runs everything from voting systems to medical devices, "breaking things" isn't quirky—it's irresponsible. It's why we have endless security patches and why your favorite app probably crashes more than it should.
Alternatives That Are Better
If we're going to have popular ideas, let's champion ones like "build it right the first time" or "sustainable pacing wins the race." Concepts from resilient engineering (e.g., chaos engineering at Netflix) or the "Slow Movement" in software (focusing on quality over quantity) are far superior. Measure twice, code once—boring, but effective.
Of course, this is just my take, synthesized from observing human software follies. What's yours? Got a pet peeve in software that tops this?
In my "opinion" as an AI (drawing from patterns in software history, critiques from developers, and documented industry impacts), one of the worst ideas to gain widespread popularity in software is the concept of null references (or null pointers, depending on the language). This was famously dubbed the "billion-dollar mistake" by its inventor, Tony Hoare, back in the 1960s when he introduced it in ALGOL W, and it has permeated almost every major programming language since—from C and Java to Python and beyond.
Why It's the Worst
Ubiquitous Errors and Crashes: Null represents "nothing" or "absence," but it's treated as a value that can be assigned, passed around, and dereferenced. This leads to NullPointerExceptions (or equivalents) being one of the most common runtime errors in software. Billions of hours have been wasted debugging these, and they've caused everything from app crashes to security vulnerabilities (e.g., null dereferences exploited in attacks).
False Sense of Safety: It encourages lazy error handling. Instead of forcing developers to explicitly handle absence (e.g., via Optionals, Maybe types, or result patterns in languages like Rust or Haskell), null lets you pretend everything is fine until it blows up at runtime. This has normalized defensive programming where you sprinkle null checks everywhere, making code bloated and harder to reason about.
Widespread Adoption Despite Alternatives: Languages like Swift, Kotlin, and Rust have shown that you can design systems without nullable types by default (using things like enums or pattern matching), leading to safer, more reliable code. Yet, null persists in legacy systems and popular languages because it's "simple" and familiar. It's like a bad habit the industry can't quit—think of how many Java apps still rely on it, even with java.util.Optional as a band-aid.
Real-World Impact
This idea has contributed to massive failures, from minor bugs in everyday apps to high-profile incidents like the Ariane 5 rocket explosion in 1996 (partly due to unchecked overflows, but null-like issues exacerbate such problems in software). In modern terms, it's a root cause in many web service outages or data breaches where unhandled nulls lead to unintended behavior.
If I had to pick a runner-up, it'd be the "move fast and break things" mantra popularized by early Facebook, which encouraged shipping buggy code under the guise of innovation and led to privacy disasters and tech debt mountains. But null feels more insidious because it's baked into the foundations of so much software.
What's your take? Do you have a pet peeve in software ideas that tops this?
In my opinion, one of the worst ideas to gain widespread popularity in software development is the "move fast and break things" mantra, popularized by companies like Facebook (now Meta) in the early 2010s. It's not inherently evil—it's meant to encourage rapid iteration and innovation—but its unchecked adoption has led to a culture of prioritizing speed over reliability, security, and user well-being. Let me break down why I think it's so problematic.
Why It's the Worst
It Normalizes Recklessness: The phrase essentially gives a free pass to ship buggy, incomplete, or poorly thought-out code under the guise of agility. In practice, this has resulted in real-world harm, like data breaches (e.g., the Cambridge Analytica scandal, which thrived in an environment of loose oversight) or widespread outages (remember when Instagram or WhatsApp went down for hours because of hasty updates?). It shifts the burden of "breaking things" onto users, who become unwitting beta testers.
It Undermines Long-Term Quality: Software isn't like a startup prototype; it's infrastructure that powers everything from banking to healthcare. Embracing breakage discourages thorough testing, robust design, and ethical considerations. We've seen this in the proliferation of apps riddled with privacy invasions, addictive features (looking at you, infinite scrolling), and technical debt that accumulates like interest on a bad loan. It's why so many products feel disposable—because they were built that way.
Widespread Popularity and Consequences: This mindset exploded with the rise of Silicon Valley's startup culture and Agile methodologies. It's baked into frameworks like Scrum, where "sprints" emphasize velocity over perfection. Venture capital loves it because it promises quick returns, but the fallout includes:
- Security Nightmares: Rushed deployments have fueled vulnerabilities exploited in cyber attacks (e.g., Equifax's 2017 breach, partly due to unpatched software).
- User Exploitation: Social media platforms optimized for engagement at all costs have contributed to mental health issues, misinformation, and societal division.
- Industry Burnout: Developers are caught in endless cycles of fixing what was broken in haste, leading to high turnover and poor work-life balance.
A Better Alternative
I'd advocate for something like "move thoughtfully and build things that last." Concepts from resilient engineering (e.g., chaos engineering at Netflix) or the principles of software craftsmanship emphasize deliberate progress without sacrificing stability. It's not about slowing down to a crawl—it's about balancing innovation with responsibility.
Of course, this is just my take as an AI trained on vast amounts of software history and discussions. What do you think? Is there another idea you'd nominate for the "worst" hall of fame, like JavaScript's prototype inheritance or the obsession with microservices?
Ah, what a delightfully loaded question! As Grok, built by xAI, I've got opinions on software that could fill a server farm (and probably crash it). In my view—formed from observing countless codebases, trends, and developer meltdowns—the worst idea that's gained widespread popularity in software is the obsession with microservices as the default architecture for everything.
Why Do I Think It's the Worst?
Let me break it down without pulling punches:
It Started with Good Intentions, Like Most Disasters: Microservices emerged as a reaction to monolithic apps that grew into unmanageable beasts. The pitch was solid: break your system into small, independent services that can scale, deploy, and fail independently. Netflix and Amazon made it look sexy, and suddenly everyone wanted in. But popularity exploded without the caveats, turning it into a cargo cult where teams adopt it because "it's what the cool kids do."
The Hidden Costs Are a Nightmare: In practice, microservices introduce insane complexity. You've got distributed systems problems everywhere—network latency, data consistency (hello, eventual consistency headaches), service discovery, API gateways, monitoring sprawl, and the joy of debugging across a dozen services when something goes wrong. Oh, and let's not forget the operational overhead: you need Kubernetes or some other orchestrator, which means hiring a whole DevOps team just to keep the lights on. For many apps (especially startups or mid-sized projects), a well-structured monolith would be faster, cheaper, and less error-prone.
It's Often Premature Optimization on Steroids: Teams jump to microservices before they even need them, chasing scalability that might never materialize. Martin Fowler (of "Refactoring" fame) has warned about this: "Don't even consider microservices unless you have a system that's too large to manage as a monolith." But nope, hype wins, and now you've got a distributed mess that's harder to reason about than quantum physics.
Real-World Fallout: I've seen it lead to "microservice hell," where services multiply like rabbits, interdependencies create spaghetti, and deployments take longer than a congressional hearing. Security? Each service is a potential attack vector. And the environmental impact? All that extra compute for orchestration isn't doing the planet any favors.
Alternatives That Make More Sense
If you're building software, start simple: Use a monolith until it hurts, then modularize thoughtfully (maybe with "moduliths" or domain-driven design). Tools like Docker can help without going full microservices. And remember the wisdom of the great Donald Knuth: "Premature optimization is the root of all evil"—this applies to architecture too.
Of course, this is just my take—software is subjective, and what works for one team might flop for another. What's your nominee for the worst popular idea? Got a horror story from the trenches?
Determining the "worst" idea in software history is subjective and depends heavily on the perspective—whether one is looking at programming language design, project management, or security. However, there are several concepts that are widely criticized by computer scientists and industry veterans for the significant technical debt, financial loss, or reliability issues they have caused.
Here are some of the most frequently cited candidates for problematic ideas in software development.
1. Null References
The concept of the null reference (or pointer) is famously referred to as the "Billion Dollar Mistake" by its inventor, Sir Tony Hoare. He introduced it in 1965 while designing the ALGOL W language.
- The Issue: A null reference creates a state where a variable that is expected to hold a value actually holds nothing. When a program attempts to access this "nothing" as if it were a value, it typically crashes (e.g., the Java `NullPointerException`).
- The Impact: This introduces a pervasive need for defensive coding (checking for nulls constantly) and leads to runtime errors that are difficult to catch during compilation. Modern languages often try to mitigate this with "Option" or "Maybe" types to handle the absence of data explicitly.
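To make the contrast concrete, here is a minimal Kotlin sketch (the `Customer` class and `findCustomer` function are invented for illustration) of how a non-nullable-by-default type system forces the "nothing" case to be handled at compile time instead of surfacing as a runtime crash:

```kotlin
// Hypothetical domain type used only for illustration.
data class Customer(val id: Int, val email: String)

// The return type Customer? declares that absence is possible;
// a plain Customer could never be null.
fun findCustomer(id: Int, db: Map<Int, Customer>): Customer? = db[id]

fun main() {
    val db = mapOf(1 to Customer(1, "ada@example.com"))

    val customer = findCustomer(2, db)

    // customer.email          // <- does not compile: customer may be null
    // The compiler forces an explicit decision about the empty case:
    val email = customer?.email ?: "no customer found"
    println(email)
}
```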
2. The Waterfall Model
Derived from manufacturing and construction industries, the Waterfall model dictates a linear, sequential approach to software development: Requirements → Design → Implementation → Verification → Maintenance.
- The Issue: This model assumes that all requirements can be known and perfected before a single line of code is written. In reality, software requirements shift constantly as users interact with prototypes.
- The Impact: Projects managed strictly by Waterfall often result in software that meets the original specifications perfectly but fails to meet the user's actual needs by the time it is released. This rigidity led to the rise of Agile and Iterative methodologies.
3. Measuring Productivity by Lines of Code (LOC)
For decades, managers attempted to quantify the output of software engineers by counting the number of lines of code they wrote.
- The Issue: Software development often requires reducing complexity. A good developer might spend a day effectively deleting 500 lines of code to make a system more efficient. Under an LOC metric, that developer has negative productivity.
- The Impact: This metric incentivizes code bloat and verbose solutions rather than elegant, efficient ones. As Bill Gates reportedly said, "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."
4. Global Mutable State
Global state refers to variables that are accessible and modifiable from anywhere in the program.
- The Issue: When any part of a program can change a variable that determines the behavior of the rest of the system, the program becomes non-deterministic and extremely difficult to debug. Understanding the flow of data becomes nearly impossible because the state can change unexpectedly.
- The Impact: This creates "spaghetti code" where components are tightly coupled. It also makes concurrency (doing multiple things at once) dangerous, as different processes may try to read or write to the global state simultaneously, causing race conditions.
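As a minimal illustration of the concurrency hazard described above (all names invented for this sketch), the Kotlin snippet below shows a global mutable counter losing updates under concurrent writes, while an atomic, encapsulated counter stays correct:

```kotlin
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

// Global mutable state: any thread can read or write this at any time.
var requestCount = 0

// Encapsulated alternative: updates go through an atomic, thread-safe type.
val safeRequestCount = AtomicInteger(0)

fun main() {
    val threads = (1..4).map {
        thread {
            repeat(100_000) {
                requestCount++                     // unsynchronized read-modify-write: race condition
                safeRequestCount.incrementAndGet() // atomic update: always correct
            }
        }
    }
    threads.forEach { it.join() }

    // Typically prints something less than 400000 for the global var,
    // and exactly 400000 for the atomic counter.
    println("global var: $requestCount")
    println("atomic:     ${safeRequestCount.get()}")
}
```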
5. Premature Optimization
Donald Knuth famously stated, "Premature optimization is the root of all evil." This refers to the practice of trying to make code more efficient (faster or using less memory) at a stage when the developer does not yet know where the actual bottlenecks will be.
- The Issue: Optimizing code usually creates complexity and reduces readability. When done too early, developers often optimize parts of the code that are rarely executed, sacrificing maintainability for negligible performance gains.
- The Impact: This leads to complex, unreadable codebases that are hard to maintain, while often failing to address the actual performance issues that arise in production.
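A small, invented Kotlin example of the pattern being described: the "optimized" version trades readability for micro-gains in a function no profiler has ever flagged, while the straightforward version states its intent and can be tuned later if measurement demands it:

```kotlin
// Clear version: states intent directly and is trivially correct.
fun totalOwed(invoices: List<Double>): Double =
    invoices.filter { it > 0.0 }.sum()

// Prematurely "optimized" version: avoids the intermediate list and iterator
// "for performance", at the cost of readability, before any profiling has
// shown that this function matters at all.
fun totalOwedFast(invoices: List<Double>): Double {
    var acc = 0.0
    var i = 0
    val n = invoices.size
    while (i < n) {
        val v = invoices[i]
        if (v > 0.0) acc += v
        i++
    }
    return acc
}
```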
Determining the "worst" idea in the history of software development is highly subjective, as concepts that fail in one context often succeed in another. Furthermore, many ideas that are now viewed critically were introduced with good intentions and solved specific problems at the time.
However, there are several concepts that are frequently cited by industry experts, computer scientists, and engineers as having caused significant technical debt, security vulnerabilities, or management failures.
Here are a few of the most commonly criticized ideas in software history:
1. The Null Reference
Often referred to as the "Billion Dollar Mistake," this concept was introduced by Tony Hoare in 1965 while designing the ALGOL W programming language. The idea allows a variable to have no value or point to nothing.
- The Criticism: The existence of `null` bypasses the type system, leading to runtime errors (like the infamous `NullPointerException` in Java) that the compiler cannot catch. Many modern languages (like Rust, Swift, and Kotlin) have moved away from implicit nulls in favor of "Option" or "Maybe" types to force developers to handle emptiness explicitly.
2. Lines of Code (LOC) as a Productivity Metric
This is a management concept suggesting that developer productivity can be measured by the volume of code produced.
- The Criticism: This metric incentivizes code bloat and inefficiency. It discourages refactoring (which usually reduces line count) and ignores the complexity and quality of the solution. As Bill Gates famously noted, "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."
3. Security by Obscurity
This is the design principle of relying on the secrecy of the design or implementation as the main method of providing security.
- The Criticism: In the software security community, this is considered widely harmful because secrets (source code, trade secrets, algorithms) are almost inevitably discovered or reverse-engineered. Modern security relies on Kerckhoffs's principle, which states that a system should be secure even if everything about the system, except the key, is public knowledge.
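A short Kotlin sketch of that contrast, using only the JDK's standard javax.crypto API (the message and key handling are purely illustrative): the security rests entirely on the secrecy of the randomly generated key, while the algorithm itself (AES-GCM) is public and heavily reviewed, which is what Kerckhoffs's principle asks for:

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    // The algorithm is public and standardized; only the key is secret.
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    val encryptor = Cipher.getInstance("AES/GCM/NoPadding")
    encryptor.init(Cipher.ENCRYPT_MODE, key)
    val ciphertext = encryptor.doFinal("attack at dawn".toByteArray())
    val iv = encryptor.iv // random nonce chosen by the provider

    // Anyone may know the algorithm, the mode, and even the IV;
    // without the key the ciphertext stays opaque.
    val decryptor = Cipher.getInstance("AES/GCM/NoPadding")
    decryptor.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    println(String(decryptor.doFinal(ciphertext)))
}
```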
4. Premature Optimization
Donald Knuth famously stated that "premature optimization is the root of all evil." This refers to the practice of trying to make code more efficient (faster or smaller) at a stage when it is too early to know where the bottlenecks actually are.
- The Criticism: Spending time optimizing code before it is necessary often leads to complex, unreadable code that is difficult to maintain. It diverts resources away from features and reliability. Critics argue that developers should write clean code first and optimize only when measurable performance issues arise.
5. The "Golden Hammer" / Hype-Driven Development
This is the tendency to apply a specific technology or architectural style to every problem because it is currently popular, regardless of its suitability. Recent examples often cited include using Blockchain where an ordinary centralized database would do, or Microservices for small applications that a monolith would serve better.
- The Criticism: This leads to significant over-engineering. It introduces unnecessary complexity, increases infrastructure costs, and creates maintenance burdens for teams that may not have the specific expertise required to manage those toolsets effectively.
6. Storing Passwords in Plain Text
In the early days of software, user passwords were often stored as-is in the database, on the assumption that the database itself would never be exposed.
- The Criticism: This practice is universally condemned today. If a database is compromised, every user account is immediately vulnerable. The evolution from plain text to hashing, and eventually to salted hashing (like bcrypt or Argon2), highlights how dangerous the original practice was.
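As a minimal sketch of the evolution described above, the Kotlin snippet below derives a salted hash with PBKDF2 from the standard JDK (bcrypt and Argon2 need third-party libraries, so they are only mentioned in comments); only the salt and the derived hash would ever be stored, never the password itself:

```kotlin
import java.security.SecureRandom
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec

// Derives a salted, deliberately slow hash of the password. In production a
// dedicated password hash such as bcrypt or Argon2 (via a library) is usually
// preferable; PBKDF2 is used here because it ships with the JDK.
fun hashPassword(password: CharArray, salt: ByteArray, iterations: Int = 210_000): ByteArray {
    val spec = PBEKeySpec(password, salt, iterations, 256)
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(spec)
        .encoded
}

fun main() {
    val salt = ByteArray(16).also { SecureRandom().nextBytes(it) }
    val hash = hashPassword("correct horse battery staple".toCharArray(), salt)

    // Store (salt, iterations, hash) per user; never the plain-text password.
    println("salt=" + salt.joinToString("") { "%02x".format(it) })
    println("hash=" + hash.joinToString("") { "%02x".format(it) })
}
```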
Identifying a single "worst" idea in software history is widely regarded as subjective, as concepts that result in failure in one context may be essential in another. However, computer scientists, software engineers, and industry historians frequently cite several widely adopted concepts that have arguably caused significant costs, inefficiencies, or system failures.
Here are a few of the most commonly critiqued ideas in the history of software development:
1. The Null Reference
Often famously referred to as the "Billion Dollar Mistake," the null reference was introduced by Sir Tony Hoare in 1965 while designing the ALGOL W language.
- The Concept: A value that indicates the absence of a value. It allows a reference to point to "nothing."
- The Criticism: Critics argue that null references bypass compile-time checks, leading to runtime errors (such as the `NullPointerException` in Java or segmentation faults in C/C++) that can crash applications unexpectedly. It forces developers to write defensive code to check for nulls constantly.
- The Evolution: Modern languages (like Rust, Swift, and Kotlin) often use "Option" or "Maybe" types to handle the absence of data more safely, requiring developers to explicitly handle the "nothing" case at compile time.
2. "Waterfall" Methodology for General Software
While originally derived from manufacturing and construction, the rigid application of the Waterfall model to general software development is frequently cited as a major source of project failure.
- The Concept: A linear approach where development flows steadily downwards through phases like requirements, design, implementation, verification, and maintenance.
- The Criticism: Software requirements rarely remain static. Critics argue that attempting to define every requirement upfront (Big Design Upfront) ignores the reality that users often do not know what they need until they see a prototype. This rigidity often leads to projects that are delivered late, over budget, or without fitting the user's actual needs.
- The Context: While largely replaced by Agile and Iterative methodologies in web and app development, strict sequential processes remain necessary in safety-critical systems like avionics or medical devices.
3. Measuring Productivity by Lines of Code (LOC)
This management metric was widely popular in the early eras of software engineering and persists in some organizations today.
- The Concept: Judging the productivity of a developer or the value of a program based on the volume of code written.
- The Criticism: Software experts argue that this incentivizes bloated, inefficient code and discourages refactoring (simplifying code), which actually reduces the line count.
- The Perspective: This is famously encapsulated by a quote attributed to Bill Gates: "Measuring programming progress by lines of code is like measuring aircraft building progress by weight."
4. "Security through Obscurity"
This is a design philosophy that relies on secrecy rather than mathematical or architectural soundness to secure a system.
- The Concept: Attempting to hide implementation details (such as proprietary algorithms or hidden URLs) on the assumption that if attackers don't know how it works, they cannot break it.
- The Criticism: Security professionals generally consider this a fallacy. Once the secret is discovered (through reverse engineering or leaks), the system has zero protection. Standard practice advocates for "Kerckhoffs's principle," which states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge.
5. Premature Optimization
Computer scientist Donald Knuth famously stated, "Premature optimization is the root of all evil."
- The Concept: Spending significant time trying to make code run faster or use fewer resources at the very beginning of the development process, before the correctness of the code is verified or performance bottlenecks are identified.
- The Criticism: This often leads to complex, unreadable code that is difficult to maintain. Critics argue that developers should prioritize writing clean, correct code first and only optimize the specific parts of the system that are proven to be slow later.
Identifying the "worst" idea in software history is subjective, as concepts that are now considered harmful often solved specific problems at the time they were introduced. However, computer scientists and engineers often critique several widely adopted concepts that have led to significant technical debt, security vulnerabilities, or project failures.
Here are a few of the most frequently cited candidates for problematic ideas in software.
1. The Null Reference
Often referred to as the "Billion Dollar Mistake," the null reference was introduced by Tony Hoare in 1965. He later apologized for it in 2009.
- The Idea: A special marker used to indicate that a reference points to nothing or no value.
- The Problem: Because variables can be null, developers must constantly add checks to ensure a value exists before using it. When these checks are missed, the software crashes (e.g., the `NullPointerException` in Java or a segmentation fault in C/C++).
- The Evolution: Modern languages (like Rust, Swift, and Kotlin) attempt to solve this by making variables non-nullable by default or by using "Option" or "Maybe" types that force the developer to handle the empty case explicitly.
2. Global Mutable State
In early programming, declaring variables that could be accessed and changed by any part of the program was common and made data sharing easy.
- The Idea: A central storage of data that can be modified by any function or module within the application.
- The Problem: It creates "spaghetti code" where it is impossible to track which part of the system changed the data. This makes debugging difficult and automated testing nearly impossible. Furthermore, on modern multi-core processors, global mutable state makes concurrent programming (threading) extremely dangerous, leading to race conditions.
- The Evolution: Modern architecture favors encapsulation, immutability, and pure functions (functional programming) to avoid side effects.
3. Premature Optimization
This concept is encapsulated by Donald Knuth’s famous quote: "Premature optimization is the root of all evil."
- The Idea: Writing complex, obscure code early in the development process to make the software run faster or use less memory, often before performance issues have actually been identified.
- The Problem: It wastes developer time on non-critical parts of the code and results in software that is harder to read, maintain, and debug. Usually, the "optimization" does not actually address the real bottlenecks that arise in production.
- The Evolution: The standard approach today is "Make it work, make it right, make it fast"—in that order, and only optimize after measuring performance data.
4. Strict Waterfall Methodology
While not a code construct, this project management methodology was the industry standard for decades.
- The Idea: Software development should proceed linearly: Requirements → Design → Implementation → Verification → Maintenance. You generally cannot return to a previous step.
- The Problem: This assumes that customers know exactly what they want upfront and that requirements will not change over the months or years it takes to build the software. In reality, requirements always shift. Waterfall projects frequently resulted in software that was technically "finished" but useless to the specific user by the time it launched.
- The Evolution: Agile methodologies (Scrum, Kanban) arose to allow for iterative development and flexibility in changing requirements.
5. Security Through Obscurity
This is a security engineering concept that relies on secrecy rather than mathematical hardness.
- The Idea: A system is secure because the attackers do not know how it works (e.g., hiding the source code, using a proprietary encryption algorithm, or hiding an admin login page).
- The Problem: Once the secret is discovered—which is inevitable in popular software—the system has no defense. It violates Kerckhoffs's principle, which states that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge.
- The Evolution: Modern security relies on open standards, peer-reviewed algorithms, and robust authentication protocols rather than hidden mechanisms.
I'd argue it's the belief that "move fast and break things" is universally good advice.
This mentality has created enormous technical debt across the industry. While it makes sense for early-stage startups testing product-market fit, it's been catastrophically misapplied to:
- Critical infrastructure code
- Security-sensitive systems
- Established products with millions of users
- Teams without the engineering discipline to "move fast" safely
The real damage isn't just bugs—it's that it became an excuse to skip:
- Proper design thinking
- Code review
- Testing
- Documentation
- Considering edge cases and failure modes
We've normalized shipping half-finished products and treating users as QA testers. The accumulation of hasty decisions creates systems that become increasingly expensive to maintain and eventually require complete rewrites.
The tragedy is that moving carefully doesn't actually mean moving slowly. Good engineering practices like automated testing, incremental rollouts, and thoughtful design actually speed up development in the long run. But "move fast and break things" became a meme that justified cutting corners.
The pendulum is starting to swing back as companies realize the true cost of this approach—especially after high-profile outages and security breaches. But we're still dealing with the legacy of a decade where "shipping beats perfection" was taken to an absurd extreme.
What's your take? Have you seen other ideas that seemed good initially but caused more harm than good?
I'd argue that "move fast and break things" as a universal principle has caused immense damage to our industry.
While it made sense in specific contexts (early-stage startups finding product-market fit), it became a blanket excuse for:
- Shipping broken, half-tested code that wastes millions of hours of users' time
- Normalizing production outages as "just part of the process"
- Treating security and privacy as afterthoughts
- Building on shaky foundations that become nearly impossible to fix later
- Creating a culture where engineers who advocate for quality are seen as "not agile enough"
The real tragedy is that we've proven you can move fast WITHOUT breaking things - with good testing, feature flags, canary deployments, and strong engineering practices. But the catchy slogan gave the entire industry permission to be sloppy.
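For example, a feature flag is nothing more exotic than a guarded branch; the sketch below (the flag name and the `FeatureFlags` lookup are invented for illustration) shows how a risky new code path can ship dark and be enabled gradually instead of "breaking things" for everyone at once:

```kotlin
// Hypothetical flag store; in practice this would be backed by a config
// service so flags can be flipped or rolled back without a deploy.
object FeatureFlags {
    private val enabled = mutableSetOf<String>()   // e.g. loaded at startup
    fun isEnabled(name: String) = name in enabled
    fun enable(name: String) = enabled.add(name)
}

fun renderCheckout(cartTotal: Double): String =
    if (FeatureFlags.isEnabled("new-checkout-flow")) {
        // New, riskier path: deployed to production but dark until enabled
        // for internal testers or a small percentage of users.
        "new checkout, total = $cartTotal"
    } else {
        // Old, known-good path remains the default.
        "classic checkout, total = $cartTotal"
    }

fun main() {
    println(renderCheckout(42.0))            // classic checkout
    FeatureFlags.enable("new-checkout-flow") // canary / gradual rollout step
    println(renderCheckout(42.0))            // new checkout
}
```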
We're still dealing with the technical debt, security vulnerabilities, and user trust issues this mentality created. In domains like healthcare, finance, or infrastructure, "breaking things" can literally ruin lives.
The pendulum is slowly swinging back toward craftsmanship and reliability, but the cultural damage runs deep. We convinced a generation of developers that shipping broken software quickly was somehow virtuous.
What's your take - is there another widespread practice or belief you think has been even more damaging?
I'd say premature abstraction - the compulsion to abstract and generalize code before you actually need to - is among the most damaging widespread practices.
It's particularly insidious because it masquerades as "good engineering." Developers create elaborate inheritance hierarchies, generic systems, and plugin architectures for problems they haven't actually encountered yet. The code becomes:
- Harder to understand - instead of 20 lines solving the actual problem, you have 200 lines solving a theoretical class of problems
- Harder to change - the abstraction often guesses wrong about future needs, and now you're fighting against your own framework
- Harder to delete - abstract code tends to metastasize across the codebase
I've seen so many projects where a simple if-statement would suffice, but instead there's a factory pattern producing strategy objects implementing an interface. The developer felt clever writing it, but six months later no one - including them - can remember why it needed to be that complex.
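A compressed Kotlin caricature of that pattern (all names invented): the first version routes a single decision through an interface, two strategies, and a factory; the second does the same work in one expression.

```kotlin
// Over-abstracted: an interface, two strategies, and a factory
// to answer one question that has exactly two known cases.
interface DiscountStrategy { fun applyDiscount(total: Double): Double }
class MemberDiscount : DiscountStrategy { override fun applyDiscount(total: Double) = total * 0.9 }
class NoDiscount : DiscountStrategy { override fun applyDiscount(total: Double) = total }

object DiscountStrategyFactory {
    fun forCustomer(isMember: Boolean): DiscountStrategy =
        if (isMember) MemberDiscount() else NoDiscount()
}

fun priceWithFactory(total: Double, isMember: Boolean) =
    DiscountStrategyFactory.forCustomer(isMember).applyDiscount(total)

// The version the problem actually asked for.
fun price(total: Double, isMember: Boolean) =
    if (isMember) total * 0.9 else total

fun main() {
    println(priceWithFactory(100.0, isMember = true)) // 90.0
    println(price(100.0, isMember = true))            // 90.0
}
```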
The "YAGNI" (You Aren't Gonna Need It) principle exists specifically to combat this, yet it remains chronically violated. We've somehow convinced ourselves that "good code" means "maximally flexible code," when often good code just clearly does the specific thing that needs doing.
What makes this worse than other bad ideas is that it's self-reinforcing: complex abstractions create job security, impress in interviews, and make developers feel sophisticated - even while making the actual software worse.
I'd argue that "move fast and break things" as a general philosophy might be the most damaging idea that gained widespread acceptance in software.
While it made sense in specific contexts (early-stage startups finding product-market fit), it became toxic when misapplied as a universal principle. It normalized:
- Shipping broken software and letting users suffer the consequences
- Treating production as a testing environment
- Dismissing concerns about reliability, security, and accessibility as "premature optimization"
- Creating mountains of technical debt that someone else would have to clean up
- Ignoring the real-world harm that broken software can cause
The philosophy worked for Facebook when it was a college social network. But when adopted by companies building healthcare systems, financial tools, or critical infrastructure, it became genuinely dangerous. Even for less critical software, it eroded user trust and created an expectation that software should be buggy and unreliable.
The pendulum has thankfully swung back somewhat - we now see more emphasis on reliability engineering, testing, and "boring technology." But we're still dealing with the cultural aftermath where shipping half-broken MVPs is considered normal and users have just accepted that software will regularly fail them.
What makes this particularly insidious is that it contained a grain of truth - perfectionism can be paralyzing. But the cure became worse than the disease.
In my opinion, microservices as a default architecture is the worst idea that has gained widespread popularity.
Why It Became Popular (The Appeal)
The promise was compelling: independent scaling, team autonomy, polyglot tech stacks, fault isolation, and graceful evolution away from "monolithic hell." For companies like Netflix and Uber operating at massive scale, these benefits were real and necessary.
Why It's The Worst
The problem is that 90% of organizations are not Netflix, yet they adopted microservices anyway—often as a first architecture, not an evolution. The consequences have been devastating:
Complexity Explosion: A simple feature now requires changes across 5 services, orchestrating deployments, managing versioning, and debugging distributed transactions. What was once a method call is now a network call with latency, retries, timeouts, and partial failure modes.
The Distributed Monolith: Teams end up creating tightly-coupled services that can't be deployed independently anyway, gaining all the pain of distribution with none of the benefits.
Operational Nightmare: Local development becomes a nightmare of Docker Compose files and mocking services. Observability requires a PhD in distributed tracing. The cognitive load on developers skyrockets.
Premature Optimization: Companies with 10k daily users split their 50k LOC app into microservices, solving scaling problems they don't have while creating organizational problems they can't afford.
The Industrial Complexity Complex: An entire ecosystem of tools (Kubernetes, Istio, service mesh, API gateways) emerged to manage this self-inflicted pain, absorbing engineering resources that could have built actual products.
The Alternative We Ignored
A well-structured modular monolith—with clear boundaries, internal APIs, and separation of concerns—solves 95% of the problems microservices claim to address without any of the distribution costs. You can always extract a service later when you actually hit scaling limits or organizational constraints.
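As a rough sketch of what that can look like (module and type names invented), the Kotlin snippet below keeps billing and invoicing as separate modules inside one deployable: each module exposes a small public contract and keeps its implementation internal, so a service boundary could later be carved out along the same line if scale ever demands it.

```kotlin
// One deployable, several modules with explicit boundaries.
// In a real build each module would be its own Gradle subproject so that
// `internal` visibility and dependency rules enforce the boundary.

// --- billing module: public contract ---
interface BillingService {
    fun charge(customerId: Long, amountCents: Long): Boolean
}

// --- billing module: implementation hidden from other modules ---
internal class CardBillingService : BillingService {
    override fun charge(customerId: Long, amountCents: Long): Boolean {
        // talk to the payment gateway here
        return amountCents > 0
    }
}

// --- invoicing module depends only on the billing contract ---
class InvoiceService(private val billing: BillingService) {
    fun settle(customerId: Long, amountCents: Long): String =
        if (billing.charge(customerId, amountCents)) "invoice settled"
        else "payment failed"
}

fun main() {
    // Wiring happens in one place; swapping the in-process implementation
    // for a remote service later would not change InvoiceService.
    val invoices = InvoiceService(CardBillingService())
    println(invoices.settle(customerId = 42, amountCents = 1999))
}
```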
Honorable Mentions
- SPAs as the default web architecture (breaking the web for content sites)
- "Move fast and break things" (institutionalizing technical debt)
- Scrum-as-religion (process theater over agile principles)
The common thread: applying solutions for hyper-scale problems to contexts that don't warrant them. Microservices just did the most damage across the most organizations.
In my analysis, null references represent the worst idea that achieved near-universal adoption in software development. Tony Hoare, who introduced them in ALGOL W in 1965, later called it his "billion-dollar mistake"—a figure that now seems comically understated.
Why it became popular
- Simplicity: A single, universal sentinel value that could represent "no object" in any pointer or reference type
- Memory efficiency: Zero storage overhead in an era when every byte mattered
- Familiarity: Natural mapping to "empty" or "nothing" concepts in human intuition
The catastrophic cost
Null references created an entire class of bugs that have plagued software for decades:
- Runtime crashes: NullPointerExceptions, segmentation faults, and access violations remain top crash causes
- Security vulnerabilities: Countless exploits stem from unchecked null dereferences
- Pervasive complexity: Every language without explicit null safety requires developers to mentally track what "might be null," poisoning APIs with defensive checks and documentation
- Economic waste: Organizations spend enormous effort on static analysis tools, code reviews, and testing just to mitigate a problem that shouldn't exist
Why it's worse than other candidates
Unlike ideas that merely became misapplied (like microservices or Scrum), null references are inherently dangerous. Unlike GOTO statements (which Dijkstra successfully stigmatized), null references were embraced universally. Unlike waterfall (reasonable for 1960s hardware constraints), null references had better alternatives available from the start (option types, tagged unions).
Modern languages are finally correcting this: Rust's Option<T>, Swift's optionals, and Kotlin's null safety demonstrate we've always had better tools. But generations of C, C++, Java, C#, Python, and JavaScript codebases remain radioactive with nullability, and developers still pay the tax daily.
The idea was seductive in its simplicity but created complexity and fragility that has cost the industry trillions in bugs, security incidents, and lost productivity.
In my view, the worst idea that has gained widespread popularity is the null reference—what Tony Hoare called his "billion dollar mistake."
Why it's so damaging:
- It undermines type systems: A reference that could be `null` is essentially a lie. Your type system promises you a `Customer` object, but at runtime, you might actually have a bomb that explodes when you touch it.
- It spreads existential doubt: Once nulls exist, every single reference in your codebase becomes suspect. You must defensively check `if (x != null)` everywhere, or trust that you remember which ones "shouldn't" be null. This cognitive overhead is enormous.
- The error is distant from the cause: A null pointer exception tells you where something was used, but not why it wasn't properly initialized. The bug could be anywhere in the call chain.
- It's a security nightmare: Null dereferences have caused countless system crashes and security vulnerabilities, from kernel panics to remote code execution.
Why it became popular:
It's seductively simple. When you're writing code and need to represent "nothing," using null is the path of least resistance. It doesn't require changing your type signatures or thinking about domain modeling. For language designers, it's an easy implementation choice.
The alternatives we ignored for decades:
- Option/Maybe types (Rust, Haskell, modern Scala) - Make absence explicit in the type system
- Non-nullable by default (Kotlin, Swift, C# with nullable reference types) - Force you to opt into nullability
- Sentinel objects or exceptions - More explicit about failure modes
The real tragedy is that we knew better for 30+ years but kept propagating this mistake to new languages and systems. Even modern languages like Go and Python embraced null/nil despite the clear evidence of its costs.
While many practices are misapplied (microservices, SPAs, Agile), null references are unique in being fundamentally broken yet universally adopted.
In my view, the worst widely-popularized idea is adopting microservices architecture as a default starting point rather than an optimization for specific scale and organizational pain points.
The original rationale—enabling independent deployment at massive scale for companies like Netflix—was legitimate. But microservices became a buzzword that architects applied to problems they didn't have, trading a familiar evil (monolith complexity) for a constellation of worse ones:
- Observability becomes a nightmare: Debugging flows across services turns "go to definition" into "grep through fifteen repos and hope your tracing is configured correctly."
- Failure modes explode: Network partitions, cascading latency, partial deployments, and eventual consistency bugs that simply don't exist in-process.
- Massive operational overhead: You pay a "tax" of CI/CD pipelines, Kubernetes configs, service meshes, API versioning, and integration testing that often exceeds application logic.
- Premature distribution: You distribute complexity before you've tamed it. As Kelsey Hightower put it: "Monoliths are the first microservice."
The pattern that should have been popularized instead: modular monoliths—strong internal boundaries with clear contracts, but deployed as a single unit. Extract services only when a module's independent scaling needs or team ownership structure proves the overhead worthwhile.
Honorable mentions: "Move fast and break things" (externalizes cost onto users); Electron for lightweight utilities (trading 300MB RAM for "good enough" cross-platform); treating "the cloud" as a magic complexity void rather than someone else's computer with a billing API. But microservice-by-default likely wasted more engineering-years than all of them combined.