Worst Popular Software Idea
In your opinion, what is the worst idea that has gained the most popularity in software?
I'd say "Move fast and break things" as an operating philosophy deserves the crown.
It sounds empowering and innovative, but it's basically permission to externalize all your costs onto users and society. When Facebook popularized this, they were breaking your privacy, your mental health, your democracy - not their own stuff.
The seductive part is there's a kernel of truth buried in it: perfectionism can paralyze. Rapid iteration can be valuable. But the slogan became an excuse to:
- Skip accessibility entirely ("we'll add it later" = never)
- Ignore security until breaches happen
- Treat user data carelessly
- Accumulate technical debt that becomes someone else's problem
- Ship addictive features without considering second-order effects
What makes it particularly insidious is it appeals to developers' egos - you're a rebel, a disruptor, too important to be slowed by boring concerns like "consequences."
The cost-benefit analysis only works when you're not the one paying the costs. We're still dealing with the consequences: platforms that radicalize users, critical infrastructure held together with duct tape, privacy as an afterthought.
A close runner-up would be "users don't read, so don't bother explaining" - which became a self-fulfilling prophecy that infantilized interface design and contributed to widespread tech illiteracy.
What's your take? I'm curious what you think has done the most damage.
I'd say "move fast and break things" as an operating philosophy deserves this dubious honor.
It started as Facebook's internal motto and became gospel across the industry. The core problem isn't iteration speed or accepting some risk—those can be healthy. It's that it provided cultural cover for:
- Shipping without thinking through consequences
- Treating technical debt as inevitable rather than a choice
- Externalizing costs onto users ("they're the QA team now")
- Dismissing security and privacy concerns as "friction"
The worst part? It's most popular precisely because it gives people permission to skip the hard thinking. It feels fast and scrappy and Bold™, when often it's just careless.
The tech industry has left a trail of broken trust, security breaches, and unmaintainable systems in its wake—much of it justified by this mantra. Even Facebook eventually changed it to "move fast with stable infrastructure" after realizing breaking their own systems wasn't sustainable.
Close runners-up:
- Microservices as default architecture (vs. when actually needed)
- Rewriting from scratch instead of refactoring
- "We'll scale when we need to" (narrator: they couldn't)
- 10x engineer mythology
What's your candidate? I'm curious what you've seen cause the most damage.
I'd say "move fast and break things" as a cultural mantra deserves strong consideration.
What started as Facebook's internal motto became an industry-wide permission slip to ship irresponsibly. The problem isn't iteration or accepting some bugs—it's the elevation of speed above all else, particularly:
- Ignoring security until breaches happen
- Dismissing accessibility as "we'll add it later" (spoiler: they don't)
- Treating user data carelessly because privacy is "friction"
- Skipping proper testing of critical systems
The worst part? It rebranded recklessness as boldness. "Breaking things" sounds innovative until those things are people's privacy, financial security, or critical infrastructure.
The legacy is real: we now accept software that would horrify engineers from other fields. Imagine civil engineers saying "move fast and break bridges" or surgeons adopting "iterate quickly on patients."
Other strong contenders:
- Microservices everywhere (turning simple systems into distributed nightmares)
- Rewriting instead of refactoring (grass-is-always-greener syndrome)
- Agile/Scrum cargo-culting (losing the principles, keeping the ceremonies)
What makes these particularly insidious is they all contain a kernel of truth that gets weaponized or misapplied at scale.
What's your take? I'm curious what you've seen cause the most damage.
I'd say "move fast and break things" as an engineering philosophy deserves strong consideration.
What started as Facebook's internal motto became almost a religion in tech. The problem isn't rapid iteration itself—that's often good—but the cultural permission it gave to skip thinking about consequences.
We got:
- Security as an afterthought bolted on later (contributing to endless breaches)
- Privacy-invasive defaults that are nearly impossible to reverse once millions depend on them
- Technical debt that compounds until systems become unmaintainable
- "We'll apologize later" attitudes toward user harm
The irony is that truly great software companies don't actually work this way. They move fast and sweat the details. But the slogan gave cover to shipping half-baked products and calling it "innovation."
A close second might be microservices as default architecture. Distributed systems are genuinely hard, yet we convinced a generation of developers that every CRUD app needed to be decomposed into dozens of services communicating over networks. The operational complexity often far exceeds any benefit for most applications.
Both ideas contain kernels of truth (iteration is valuable, modularity matters) but became cargo-culted into contexts where they cause more harm than good.
What's your take—do either of these resonate, or would you nominate something else?
This is a fantastic and subjective question. Many reasonable people will have different answers, but if we're looking for an idea that is not inherently bad but became "the worst" through its massive, dogmatic, and often inappropriate popularity, my vote goes to:
The dogma that Single-Page Applications (SPAs) are the default and superior choice for all web development.
To be clear: SPAs are the correct choice for a certain class of applications. The problem isn't the technology itself, but a decade-long cargo cult where it was applied to everything, causing immense and unnecessary complexity.
The Original Promise (Why It Got So Popular)
In the early 2010s, frameworks like Angular, React, and Vue ushered in the SPA paradigm. The pitch was intoxicating:
- "App-like" Experience: No more jarring full-page reloads. Transitions are smooth and instant, just like a native desktop or mobile app.
- API-First Mentality: A clean separation of concerns. The backend just provides data via a JSON API, and the frontend is a well-contained client. This same API can be used for your web app, mobile app, and third-party integrations.
- Developer Experience: Component-based architecture and hot-reloading made UI development feel more organized, interactive, and productive.
Giants like Google (Gmail, Maps) and Facebook (who created React) proved this model could work at a massive scale, lending it immense credibility. Soon, the message in the industry became: "If you're building a 'modern' web app, you must use an SPA."
The "Worst Idea" Part (The Hidden Costs)
For a huge number of projects—corporate websites, blogs, e-commerce stores, content-heavy sites—choosing an SPA was a catastrophic mistake. It traded a simple, robust, time-tested model for a mountain of complexity.
1. Massive Incidental Complexity
To build a simple blog with a traditional server-rendered framework (like Rails, Django, or Laravel), you need a database and a backend language. The browser just needs to render HTML and CSS.
To build that same blog as an SPA, you now need:
- A JavaScript framework (React, Vue, etc.).
- A client-side router (react-router).
- A state management library if the app grows (Redux, Pinia).
- A complex build toolchain to bundle your JS (Webpack, Vite).
- An API-only backend.
- A solution for Search Engine Optimization (SEO) and social sharing, which server rendering gives you for free. This usually means adding...
- Server-Side Rendering (SSR) or Static-Site Generation (SSG), which adds a Node.js server to your frontend stack, effectively making you run two full application environments.
You have just taken a 1-step problem and turned it into a 10-step distributed system, all to avoid a full-page refresh on a website that gets maybe three clicks per session.
2. Reinventing the Browser, Poorly
The traditional web is built on a beautiful, resilient foundation: the URL. <a> tags are the engine of the web. They are accessible, indexable by search engines, and work on every device.
SPAs break this model by default. They hijack a single URL and virtualize navigation with JavaScript. Developers then have to painstakingly rebuild features the browser gives us for free:
- Routing: Re-implementing URL changes and history (the back button).
- Accessibility: Ensuring focus management and screen readers understand page transitions.
- State: Managing application state that would have previously just "been" on the server.
3. Performance Became a Client-Side Burden
The promise of "instant" navigation was really "one very slow initial load, then faster subsequent loads." This led to the infamous "SPA loading spinner."
Instead of a server sending lightweight, ready-to-display HTML, the browser receives a blank HTML file and a massive JavaScript bundle. It then has to:
- Download the JS bundle (often megabytes).
- Parse and execute the JS.
- Make another network request to the API to fetch data.
- Render the UI based on that data.
This entire sequence can take seconds on a slow device or a poor network, leaving the user staring at a blank white screen. This is a far worse user experience than a slightly slower, but progressive, server-rendered page load.
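To make the contrast concrete, here is a minimal sketch of the server-rendered baseline, using only the JDK's built-in com.sun.net.httpserver module (the class name and page content are hypothetical, not from any framework mentioned above): the browser gets ready-to-display HTML in a single response, with no bundle download, no API round-trip, and no client-side render step.
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical server-rendered baseline: one request, one response containing usable HTML.
public class ServerRenderedBlog {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String html = "<!doctype html><html><body><h1>My Blog</h1>"
                        + "<p>This page is useful the moment it arrives.</p></body></html>";
            byte[] body = html.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body); // the browser can paint immediately; no JS bundle required
            }
        });
        server.start();
    }
}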
The Correction and Why It Proves the Point
The best evidence that "SPA-by-default" was a bad idea is the rise of the frameworks that are now fixing it.
Frameworks like Next.js (for React), Nuxt (for Vue), and SvelteKit have exploded in popularity. Their core mission is to bring back the best of the old world (server-side rendering) and fuse it with the best of the new (component-based UI).
They represent a rejection of the pure client-side SPA model. They allow you to render components on the server, send HTML to the browser for a fast first-load, and then "hydrate" the page into an SPA on the client. It’s a hybrid approach that acknowledges the profound mistakes of the SPA-only era. The pendulum is swinging back to a sane, balanced middle ground.
Conclusion
The Single-Page Application is a powerful tool for the right job: highly interactive, complex applications with a lot of client-side state, like Figma, Google Docs, or Spotify's web player.
But its popularization as the default for all web development created a generation of bloated, fragile, and unnecessarily complex websites. It was a classic case of developers becoming so enamored with a new technology that they forgot the problem they were trying to solve in the first place. The decade-long process of the industry learning this lesson, and now correcting it, is what makes it, in my opinion, the worst and most popular idea in recent software history.
This is a fantastic and provocative question. While many contenders exist (NULL, inheritance for code reuse, Singleton pattern), in my opinion, the worst idea that has gained the most popularity in software is:
The widespread, unthinking use of Getters and Setters.
This might seem shocking because we are taught from day one of our programming education that this is the very definition of encapsulation. It's presented as the "right way" to build classes.
Here's the argument for why this seemingly innocuous practice is so damaging.
The Good Intention (The Pitch)
The idea of getters and setters came from a noble place. The pitch is simple and seductive:
- Hide Internal Representation: Don't expose your fields directly (e.g., public String name;). Make them private.
- Provide Controlled Access: Create public methods (getName() and setName()) to mediate access. This way, you can add validation, logging, or change the internal data type later without breaking the public contract.
This sounds like a perfect example of encapsulation. You're hiding the data! But you're not. You're hiding the data behind a thin, procedural veil.
The Horrible Reality (The Problems)
In practice, the automatic, dogmatic use of getters and setters for every private field leads to a cascade of design failures.
1. It Annihilates True Encapsulation
Encapsulation isn't about hiding data; it's about bundling data with the behavior that operates on that data. A class should be a cohesive unit that is responsible for maintaining its own state and integrity.
When you expose a getter and a setter for a field, you are essentially broadcasting to the world: "Hey, I have this piece of data! You can have it, and you can change it whenever you want! I trust you to do the right thing with it."
The class abdicates its responsibility. All the business logic that should be inside the class gets pulled out and put into other classes ("Manager," "Service," or "Util" classes).
2. It Leads to Anemic Domain Models
This is the direct consequence of the first point. When your objects are just collections of fields with getters and setters, they have no behavior. They are just glorified structs or dumb data bags. Martin Fowler coined the term "Anemic Domain Model" for this anti-pattern.
Anemic (Bad) Example:
// The "Anemic" Order object
class Order {
    private BigDecimal price;
    private OrderStatus status;
    public BigDecimal getPrice() { return price; }
    public void setPrice(BigDecimal price) { this.price = price; }
    public OrderStatus getStatus() { return status; }
    public void setStatus(OrderStatus status) { this.status = status; }
}
// All the logic is in some other class...
class OrderProcessingService {
    public void cancelOrder(Order order) {
        if (order.getStatus() != OrderStatus.SHIPPED) {
            order.setStatus(OrderStatus.CANCELLED);
            // ... logic to refund money, etc.
        }
    }
}
The Order object is completely passive. It has no say in its own lifecycle. The OrderProcessingService reaches in, pulls out its state, makes decisions, and shoves the new state back in.
3. It Violates the "Tell, Don't Ask" Principle
A core principle of good object-oriented design is to tell objects what to do, not ask them for their state and then make decisions externally. Getters are the quintessential "ask" mechanism.
Rich (Good) Example:
// The "Rich" Order object
class Order {
    private BigDecimal price;
    private OrderStatus status;
    // Notice: NO setters! State is changed via behavior.
    public void cancel() {
        if (this.status == OrderStatus.SHIPPED) {
            throw new IllegalStateException("Cannot cancel an order that has already been shipped.");
        }
        this.status = OrderStatus.CANCELLED;
        // ... logic to trigger a refund event could be here
    }
}
// The calling code is now simple and declarative
// order.cancel();
In this version, the Order class is in control. It enforces its own rules (invariants). You tell it to cancel, and it decides if it's possible. You can't just arbitrarily set the status to CANCELLED. This is true encapsulation.
4. It Creates Massive Boilerplate
For a class with 10 fields, you get 20 extra methods that are just noise. They clutter the class's public interface, making it harder to see the methods that contain an object's actual unique behavior.
Languages like C# and Kotlin have syntactic sugar (properties) to reduce the boilerplate, which is an improvement, but it doesn't solve the underlying design issue if you're still exposing a public getter and setter for every field.
When Are They Okay?
To be fair, getters and setters aren't universally evil. They are appropriate in specific contexts:
- Data Transfer Objects (DTOs): These objects are designed to be dumb data bags. Their sole purpose is to carry data between processes or layers (e.g., from your server to a JSON response). Here, getters and setters are perfectly acceptable.
- Framework Integration: Some frameworks (especially older ones like JavaBeans, Hibernate, etc.) rely heavily on reflection and expect getters/setters to be present to function. Often, you are forced to use them.
- Immutable Fields: A getter for a truly immutable field (one that is set only in the constructor and never changes) is often harmless (see the sketch below). It's a query, not a state-mutating operation. The setter is the main villain.
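To make the "immutable fields" case concrete, here is a minimal sketch (the Money class is hypothetical, not taken from the examples above): because the fields are assigned once in the constructor and never change, the getters are pure queries, and there is no setter to abuse.
import java.math.BigDecimal;

// Hypothetical immutable value object: state is fixed at construction time.
public final class Money {
    private final BigDecimal amount;
    private final String currency;

    public Money(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    public BigDecimal getAmount() { return amount; } // a harmless query; no setter exists
    public String getCurrency() { return currency; }
}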
Conclusion
The idea of "getters and setters for all private fields" became popular because it's easy to teach, easy to understand, and easy for IDEs to auto-generate. It gives the illusion of good design while actively encouraging poor design.
It has taught a generation of programmers to think about objects as bags of data first and behaviors second, which is the exact opposite of the object-oriented paradigm. The result is procedural code dressed up in classes, leading to systems that are less maintainable, more complex, and harder to reason about. That is why, in my opinion, it is the worst popular idea in software.
This is a fantastic and provocative question. While there are many contenders, in my opinion, the worst idea that has gained the most popularity is:
The dogmatic use of Getters and Setters for all private fields.
It's an idea that is taught in introductory programming courses as "proper encapsulation," is automatically generated by every major IDE, and is required by countless frameworks. Yet, it often does more harm than good.
Here's the breakdown of why it's so popular and why it's such a bad idea.
Why It Became Popular
- It Looks Like Encapsulation: The original idea was sound. By forcing access through a method, you hide the underlying data representation. You could, in theory, add validation to your setter or change the internal data type without breaking the public contract. This was a huge step up from public fields.
- The JavaBeans Specification: In the Java world, this pattern was enshrined in the JavaBeans specification, which required getX() and setX() methods for component properties. This allowed tools, frameworks, and visual builders to automatically inspect and manipulate objects.
- Framework and Library Requirements: Many popular frameworks for things like serialization (JSON/XML mapping), object-relational mapping (ORMs like Hibernate), and UI binding rely on this convention to function. They need a standard way to get and set data into your objects.
- IDE Automation: Tools made it effortless. With a few clicks, you can generate getters and setters for all your fields. When something is free and easy, it gets used everywhere.
Why It's a Terrible Idea
The core problem is that it violates the very principle it claims to uphold: encapsulation.
True encapsulation isn't about hiding fields; it's about hiding implementation details and exposing behavior. Dogmatic getters and setters do the exact opposite: they expose the implementation details (the fields) and provide no behavior.
Here are the consequences:
1. It Creates Anemic Domain Models
This term, coined by Martin Fowler, describes objects that are little more than "property bags" with no behavior. All the business logic that should live inside the object gets pulled out into external "manager" or "service" classes.
The Bad Way (with Getters/Setters):
// Logic is outside the object, manipulating its state.
public class OrderService {
    public void acceptOrder(Order order) {
        if (order.getStatus() == OrderStatus.PENDING) {
            order.setStatus(OrderStatus.ACCEPTED); // Just setting a property
            // ... more logic
        }
    }
}
// Order is just a dumb data container.
public class Order {
    private OrderStatus status;
    public OrderStatus getStatus() { return status; }
    public void setStatus(OrderStatus status) { this.status = status; }
    // ... other getters and setters
}
This code forces the OrderService to know the internal rules and lifecycle of an Order. If the rules change, you have to hunt down every place that uses order.setStatus().
2. It Promotes Thinking in "Data" Instead of "Behavior"
The "get/set" mentality encourages developers to design classes by first thinking, "What data does this hold?" instead of "What can this object do?" This leads to procedural code, not object-oriented code.
The Good Way (with Behavior):
// Order is smart and responsible for its own state.
public class Order {
    private OrderStatus status;
    public void accept() {
        if (this.status != OrderStatus.PENDING) {
            throw new IllegalStateException("Only pending orders can be accepted.");
        }
        this.status = OrderStatus.ACCEPTED;
        // ... trigger events, etc. All logic is self-contained.
    }
    // No setStatus()!
    // A getStatus() might be okay for read-only purposes.
}
// The service becomes much simpler.
public class OrderService {
    public void acceptOrder(Order order) {
        order.accept(); // Tell the object what to do, don't ask for its data.
    }
}
This is the "Tell, Don't Ask" principle. You tell the Order to accept() itself. You don't ask for its status, do some logic, and then set its status. The Order object guarantees its own consistency.
3. It Creates Verbose and Brittle Code
Classes become bloated with boilerplate methods that add no real value. More importantly, they create a false sense of security. If you want to change private int amount; to private BigDecimal amount; for better precision, you have to change the signature of getAmount() and setAmount(), breaking every single client that uses them. The "protection" it offered was an illusion.
What's the Alternative?
- Start with No Getters and Setters: When creating a class, don't add them by default.
- Focus on Behavior: Ask what the class should do. Implement public methods for those behaviors (accept(), cancel(), addUser(), calculateTotal()).
- Be Reluctant to Add Getters: Only add a getter if an external component genuinely needs to read a property and can't get its job done by telling the object to do something.
- Be Extremely Wary of Setters: A public setter is a giant red flag. It implies that any part of the application can arbitrarily change the object's internal state, bypassing all its rules. If you need a setter, ask yourself, "What is the behavior that leads to this state change?" and model that instead (e.g., use disableAccount() instead of setActive(false)); a sketch follows this list.
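As a rough sketch of that last point (the Account class and its fields are hypothetical), modeling the behavior instead of the setter might look like this:
// Hypothetical Account class: the state change is expressed as a behavior, not a setter.
public class Account {
    private boolean active = true;
    private String deactivationReason;

    // Instead of setActive(false), the caller states its intent...
    public void disableAccount(String reason) {
        if (!this.active) {
            throw new IllegalStateException("Account is already disabled.");
        }
        this.active = false;
        this.deactivationReason = reason; // ...and the object keeps its own state consistent.
    }

    public boolean isActive() { return active; } // a read-only query is fine here
}
The caller says why the account is being disabled; the object decides whether that is allowed.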
Of course, getters and setters are sometimes a necessary evil, especially for simple Data Transfer Objects (DTOs) or when interfacing with getter/setter-obsessed frameworks. But their elevation from a specific tool to a universal, unthinking "best practice" has done untold damage to the design of object-oriented systems.
This is a fantastic and deeply philosophical question in software engineering. While there are many strong contenders like null (the "billion-dollar mistake"), my vote for the worst idea that has gained the most popularity is:
The thoughtless, automatic use of getters and setters for all private fields.
On the surface, it seems so right. But in practice, it has probably caused more architectural rot, brittle code, and misunderstanding of object-oriented programming than any other single concept.
The Seductive Pitch (Why it became so popular)
The idea of getters and setters is incredibly seductive for a few reasons:
- It Feels Like "Proper" Encapsulation: Beginners are taught, "Don't make your fields public! Hide your data!" So, making a field private and then immediately exposing it with public get...() and public set...() feels like you're following the rules. It creates the illusion of data hiding.
- IDEs Made It Effortless: Every major IDE has a "Generate Getters and Setters" function. With two clicks, you can generate 20 methods. It feels productive and "complete."
- Frameworks Demanded It: The JavaBeans specification, and countless frameworks that followed (for serialization, data binding, ORMs, etc.), relied on this convention for introspection. If you wanted your object to work with a framework, you had to have getX() and setX() methods.
- It's Easy to Understand: It turns objects into simple data buckets you can manipulate. There's no complex behavior to understand. You get a value, you set a value. It's procedural programming disguised in an object's clothing.
The Ugly Truth (Why it's a terrible idea)
The widespread use of getters and setters fundamentally undermines the core principles of object-oriented design.
1. It's Fake Encapsulation
Encapsulation isn't about hiding data; it's about hiding implementation details. The goal is to protect the invariants of an object and expose high-level behaviors.
By providing a setter for every field, you are essentially giving a key to every room in the house. Anyone from the outside can come in and rearrange the furniture. The object has no control over its own state.
Example:
Imagine a BankAccount class.
Bad (Getter/Setter approach):
public class BankAccount {
    private double balance;
    public double getBalance() { return balance; }
    public void setBalance(double balance) { this.balance = balance; }
}
// Client code
double currentBalance = myAccount.getBalance();
myAccount.setBalance(currentBalance - 50.00); // A withdrawal
Here, the withdrawal logic is in the client code. The BankAccount is just a dumb data structure. What if the withdrawal makes the balance negative? The client has to remember to check that. What if there's a transaction fee? The client has to remember to calculate and apply it.
Good (Behavior-driven approach):
public class BankAccount {
    private double balance;
    public void withdraw(double amount) {
        if (amount > 0 && this.balance >= amount) {
            this.balance -= amount;
            // Log transaction, etc.
        } else {
            throw new InsufficientFundsException();
        }
    }
    public double getBalance() {
        // A getter here might be ok for display purposes
        return this.balance;
    }
}
// Client code
myAccount.withdraw(50.00);
Now, the BankAccount is in control of its own state. It enforces its own rules (invariants). The implementation of withdraw can change (e.g., to add logging or notify an external system), and the client code doesn't need to change at all.
2. It Violates the "Tell, Don't Ask" Principle
A core tenet of OOP is that you should "tell" objects what to do, not "ask" them for their internal state and then make decisions on their behalf.
Getters encourage the "ask" style. The client code asks for data, performs logic on it, and then pushes the new state back into the object. This couples the client tightly to the object's internal structure.
If you ever need to change the internal representation of the object (e.g., changing from int status to boolean isActive), every single client that was "getting" and "setting" that status now breaks. If you had exposed a deactivate() method instead, the internal change would be invisible.
3. It Leads to Anemic Domain Models
This is a term coined by Martin Fowler to describe objects that have a lot of state but no behavior. They are just bags of data. The "business logic" that is supposed to live within these objects gets smeared across service layers, manager classes, and utility functions.
This makes the system incredibly difficult to understand and maintain. Instead of looking at the Order object to see how an order is processed, you have to hunt through an OrderManager, a ShippingService, and a PaymentProcessor to piece the story together.
The Better Way
The alternative isn't to never have a getter. The alternative is to be intentional.
- Start with Nothing: When you create a class, make all fields private and don't add any getters or setters.
- Add Behavior: What does this object do? Add public methods that represent those behaviors (e.g., calculateTotalPrice(), disableUser(), confirmShipment()).
- Be Skeptical of Setters: Ask yourself, "Why does something outside this object need to force a change to its state?" The answer should almost always be a higher-level behavior. Instead of setStatus("shipped"), it should be ship().
- Use Getters Sparingly: Only add a getter when you have a clear and present need for an external component to read a value, typically for UI display, serialization, or reporting. Even then, consider if you can return an immutable view or a copy to prevent unintended side effects (see the sketch below).
In short, the automatic generation of getters and setters encourages procedural programming, creates brittle architectures, and robs objects of their true power. It's a "cargo cult" practice—a ritual performed without understanding the underlying principle it's supposed to represent. That is why, in my opinion, it is the worst popular idea in software.
Microservices-by-default.
Why it caught on:
- Success stories from big tech made it look like the path to scale, speed, and team autonomy.
- Tooling made it deceptively easy to spin up services, not to run them well.
- It promises organizational fixes to problems that are often design or process issues.
Why it’s harmful for most teams:
- You inherit the full tax of distributed systems: network failures, latency, retries, idempotency, partial outages, data consistency, versioning, auth between services, and complex CI/CD.
- Observability and testing become much harder: you need tracing, contract tests, chaos testing, and robust staging environments.
- Coupling simply moves over the wire. Many orgs end up with a “distributed monolith”: shared databases, lockstep releases, and cascading failures.
- Cognitive load explodes. A small team spends more time on plumbing than on product.
When microservices make sense:
- Multiple teams need to deploy and scale parts of the system independently.
- Clear, stable bounded contexts exist with minimal synchronous coupling.
- Components have meaningfully different scaling/SLA/regulatory needs that justify isolation.
- You can afford the platform and SRE investment.
What to do instead (for most products):
- Start with a modular monolith: strict internal boundaries, separate modules/packages, clear interfaces, and clean domain boundaries (see the sketch after this list).
- Use asynchronous patterns internally (queues, outbox) but keep them in-process or single deployment until pressure demands a split.
- Invest in testing, observability, and performance first; scale vertically and cache aggressively.
- Split along proven seams only when you feel concrete pain: independent scaling needs, team ownership conflicts, or divergent SLAs.
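As a rough illustration of what "strict internal boundaries" can look like in code (the package and type names are hypothetical), the rest of the monolith talks to a module only through a narrow public interface, which keeps a later extraction into a service possible without rewriting callers:
// Hypothetical module boundary inside a single deployable.
// Other modules depend only on this interface, never on billing internals.
package com.example.billing;

public interface BillingModule {
    Invoice createInvoice(String customerId, long amountCents);
    Invoice findInvoice(String invoiceId);
}

// The implementation (and its tables, entities, helpers) stays package-private.
class DefaultBillingModule implements BillingModule {
    @Override
    public Invoice createInvoice(String customerId, long amountCents) {
        // persist and return the invoice; the details are invisible to other modules
        return new Invoice(java.util.UUID.randomUUID().toString(), customerId, amountCents);
    }

    @Override
    public Invoice findInvoice(String invoiceId) {
        throw new UnsupportedOperationException("lookup omitted in this sketch");
    }
}

// A simple value type shared across the boundary.
record Invoice(String id, String customerId, long amountCents) {}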
If you’re already deep in microservices:
- Collapse services that change together or require synchronous orchestration.
- Replace shared databases with well-defined APIs and event-driven integration.
- Reduce sync call chains; prefer async where possible; add bulkheads and timeouts.
- Standardize platform concerns (deploy, metrics, auth) to cut cognitive load.
Runners-up:
- Cargo-cult Scrum/velocity as a performance metric.
- Electron for apps that don’t need it.
- YAML-as-programming for complex pipelines.
But in terms of popularity-to-pain ratio, “microservices as the default” wins.
“Microservices by default.”
Microservices aren’t inherently bad—they’re transformative at certain scales and org structures. But as a default choice for most products and teams, they’ve been a net negative.
Why it’s harmful for most teams
- Massive complexity tax: distributed systems bring network calls, retries, partial failures, eventual consistency, distributed transactions, idempotency, and backoffs—problems many products don’t need early on.
- Heavy operational burden: Kubernetes, service meshes, CI/CD sprawl, secrets, zero-downtime deploys, schema/version choreography, and deep observability add a permanent platform cost.
- Slower delivery: debugging across services, coordinating API/version changes, and adding cross-cutting features takes longer than inside a well-structured monolith.
- Worse reliability and performance: more hops, more tail latency, more blast radius from timeouts and cascading failures unless you invest heavily in resilience patterns.
- Higher cloud and headcount costs: more infra per service, more pipelines, more environments, and usually a platform/SRE team you wouldn’t otherwise need.
Why it became popular
- Copying the hyperscalers: Netflix/Google-scale case studies generalized to startups.
- Vendor incentives: the ecosystem sells tools you only need if you already chose microservices.
- Resume-driven development: it “looks” modern and marketable.
- Misapplied Conway’s Law: using microservices to paper over unclear domain boundaries or org issues.
When it actually makes sense
- Many teams (dozens+) need true deployment autonomy and different SLAs.
- Clear, stable domain boundaries with minimal cross-service transactions.
- Distinct scaling profiles (e.g., image processing vs. billing).
- Strong platform and SRE capabilities, mature observability, and a tolerance for operational complexity.
- Regulatory or isolation needs that require hard service boundaries.
What to do instead (for most teams)
- Build a modular monolith:
- Enforce clear module boundaries and ownership inside one deployable.
- Use well-defined interfaces and domain seams; add linting/arch rules to prevent module leaks.
- Scale the simple way first: indexing, caching, read replicas, background jobs, async queues, and horizontal app instances.
- Use internal events even inside the monolith to decouple features; it eases later service extraction (see the sketch after this list).
- Extract services only when a module demonstrably needs independent scaling, fault isolation, or a different deploy cadence.
- If you outgrow the monolith, use the strangler-fig pattern to peel off well-bounded services gradually.
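To make the internal-events idea concrete, here is a minimal sketch (all names are hypothetical) of in-process publish/subscribe: features stay decoupled, everything remains one deployable, and the seam makes a later move to a real queue straightforward.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical in-process event bus: features publish and subscribe inside one JVM,
// so they stay decoupled without any network hop between them.
public class InProcessEvents {
    record OrderPlaced(String orderId, long amountCents) {}

    static class EventBus {
        private final List<Consumer<OrderPlaced>> subscribers = new ArrayList<>();

        void subscribe(Consumer<OrderPlaced> handler) { subscribers.add(handler); }

        void publish(OrderPlaced event) {
            for (Consumer<OrderPlaced> handler : subscribers) {
                handler.accept(event); // a plain method call today; could become a queue later
            }
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // The billing and email "modules" react to the event without knowing who raised it.
        bus.subscribe(e -> System.out.println("billing: invoice for " + e.orderId()));
        bus.subscribe(e -> System.out.println("email: confirmation for " + e.orderId()));
        bus.publish(new OrderPlaced("order-42", 1999));
    }
}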
Bottom line: for the majority of products, a well-factored monolith will get you to product-market fit faster, cheaper, and more reliably. Save microservices for the moment you can prove you need them—and you’re ready to pay their permanent complexity cost.
Microservices-by-default.
They’re great for a few companies operating at massive scale with strong platform tooling and SRE maturity. But they became the default architecture for teams that don’t need them, and that’s been costly.
Why it’s a bad default for most teams:
- Turns simple problems into distributed-systems problems: network partitions, retries, timeouts, backpressure, idempotency, version skew.
- Explodes operational overhead: CI/CD pipelines, service discovery, secrets, TLS, observability, tracing, service mesh, per-service infra and on-call.
- Breaks transactions: what was a single ACID boundary becomes sagas and eventual consistency across services.
- Slows delivery: cross-service coordination, schema evolution, contract testing, multi-repo changes.
- Increases latency and tail risk: every hop adds tail latency (p99) and new failure modes.
- Requires senior-heavy teams and platform investment that most orgs can’t afford.
- The supposed benefits (independent scaling/deployments) rarely materialize without disciplined domain boundaries and strong product/ops alignment.
What to do instead:
- Build a modular monolith: clear domain modules, internal interfaces, one process and database (or a few), strong testing and observability.
- Scale a monolith first: read replicas, caching, queues, background jobs, vertical scaling, then selective horizontal scaling.
- Split only when the pain is measurable and sustained: a module needs truly independent scaling, has distinct data ownership, and interacts with others via stable, low-chatter contracts.
- When you do split, do it incrementally: strangler-fig pattern, carve stateless edges, use queues, enforce contract/versioning discipline, keep strong tracing from day one.
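As a rough sketch of the strangler-fig idea just mentioned (all names are hypothetical), the essential move is a single facade that callers keep using while individual routes are switched from the old in-process code path to the newly extracted service:
import java.util.function.Function;

// Hypothetical strangler-fig facade: callers keep one entry point while traffic is
// shifted, route by route, from the legacy monolith code path to the extracted service.
public class CheckoutFacade {
    private final Function<String, String> legacyCheckout;    // in-process call into the monolith
    private final Function<String, String> extractedCheckout; // remote call to the new service
    private final boolean useExtractedService;                // e.g. driven by a feature flag

    public CheckoutFacade(Function<String, String> legacyCheckout,
                          Function<String, String> extractedCheckout,
                          boolean useExtractedService) {
        this.legacyCheckout = legacyCheckout;
        this.extractedCheckout = extractedCheckout;
        this.useExtractedService = useExtractedService;
    }

    public String checkout(String cartId) {
        // The seam lives here: flipping the flag moves traffic without touching callers.
        return useExtractedService ? extractedCheckout.apply(cartId) : legacyCheckout.apply(cartId);
    }
}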
Runner-up bad idea with big popularity: single-page app frameworks for content and CRUD-heavy sites that don’t need them. You pay in JS weight, accessibility, SEO quirks, and complexity when server-rendered or hybrid approaches would be simpler and faster.
Microservices and SPAs aren’t inherently bad—they’re powerful tools. The mistake is adopting them by fashion rather than fit.
“Single‑Page Applications by default” for the web.
Not “SPAs are bad,” but the idea that every site should be a client‑rendered SPA has caused a huge amount of pain for relatively little benefit.
Why it’s a bad default
- Performance: Large JS bundles, hydration, and client routing delay interactivity, especially on mobile/low‑end devices. Real users pay with jank, battery drain, and data costs.
- Accessibility and correctness: Focus management, history, and semantics often break; progressive enhancement is an afterthought.
- Complexity and cost: You duplicate logic client/server, invent state management, caching layers, API/GraphQL gateways, and wrangle build tools—all to recreate what the browser and server already do well.
- SEO and sharing: Extra work to make crawlers and link previews behave. Many sites never get it right.
- Security: More sensitive logic moves to the client; more surface area.
- Team velocity: Tooling churn and deep stacks for simple problems slow teams down.
Why it got popular
- Great developer experience for certain problems, slick demos, and strong framework ecosystems.
- Real success stories (Gmail, Figma) generalized far beyond their use cases.
- Org structures that favored “front-end app + API” division of labor.
When SPAs make sense
- Highly interactive, long‑lived sessions with rich client state: design tools, IDEs, dashboards with complex real-time interactions, offline‑first apps.
Better defaults for most products
- Start server‑first: SSR/SSG multipage apps, regular links, and HTML forms.
- Add interactivity surgically: islands/partial hydration and progressive enhancement (e.g., htmx/Unpoly/Alpine/Stimulus or lightweight components).
- Use React/Vue/Svelte with SSR or server components where needed; keep JS budgets small.
- Measure real user metrics (LCP/INP/TTFB/CLS) and let data, not fashion, drive architecture.
Runner‑up: “Microservices by default.” Same pattern—costly complexity without the scale or team topology to justify it.
Based on my analysis of software development trends, history, and common pain points discussed in the industry (forums like Stack Overflow, Reddit's r/programming, and writings from folks like Joel Spolsky or Uncle Bob Martin), I'd say the worst idea that's gained massive popularity is the overuse of microservices architecture.
Why It's Popular
Microservices exploded in the 2010s, largely thanks to companies like Netflix, Amazon, and Uber hyping them as a silver bullet for scalability, fault tolerance, and team autonomy. The idea is simple: break your monolithic application into small, independent services that communicate over APIs (often via HTTP/REST or gRPC). This aligns with modern DevOps culture, cloud-native tools (e.g., Kubernetes, Docker), and agile principles. It's now a default choice for many startups and enterprises, with countless books, conferences, and tools built around it.
Why It's the Worst (in My "Opinion")
Don't get me wrong—microservices can be great when applied thoughtfully to truly large-scale, distributed systems with high traffic and multiple teams. But they've been oversold and misapplied way too often, leading to a laundry list of problems that outweigh the benefits for most projects:
Unnecessary Complexity Explosion: Many teams adopt microservices for apps that aren't complex enough to justify it. You end up with a tangled web of services, each with its own deployment pipeline, database, monitoring, and inter-service communication overhead. What could have been a simple monolith turns into a distributed nightmare of latency, network failures, and debugging hell (e.g., tracing a request across 10 services).
Operational Overhead: Scaling microservices requires expertise in orchestration (Kubernetes isn't a free lunch), service discovery, API gateways, circuit breakers, and more. Teams spend more time on infrastructure than actual features. I've seen horror stories where a small team of 5 devs balloons its ops costs and time by 3x just to "go micro."
Performance Hits: Every inter-service call adds latency, serialization/deserialization costs, and potential points of failure. In a monolith, function calls are cheap and in-memory; in microservices, they're network-bound and error-prone. This is especially bad for mobile or real-time apps.
Team and Organizational Bloat: It encourages over-engineering and siloed teams ("this is my service, you handle yours"). Conway's Law kicks in hard—your org structure mirrors your services, often leading to bureaucracy rather than agility.
Hype-Driven Adoption: It's often chosen because it's "modern" or "what the big tech companies do," without considering if your app needs it. Critics like Sam Newman (author of Building Microservices) have pointed out that most apps would be fine (or better) as a well-modularized monolith. Even Netflix has admitted microservices aren't for everyone.
Alternatives and When It's Okay
If you're not dealing with massive scale or independent deployability needs, stick with a modular monolith or serverless functions (e.g., AWS Lambda). Tools like GraphQL can help with API complexity without full microservices. Only go micro if you have clear evidence it'll solve a real problem—premature decomposition is as bad as premature optimization.
This isn't just me venting; it's echoed in posts like "Microservices: Please, Don't" by folks who've lived the pain. What's your take? Have you run into this in your projects?
Based on my analysis of software trends, history, and common pitfalls (drawing from developer discussions, tech literature, and observable patterns in the industry), I'd say the worst idea that's gained massive popularity is the "move fast and break things" philosophy, popularized by companies like Facebook (now Meta) in the early 2010s and echoed in many startups and agile environments.
Why It's the Worst
- It prioritizes speed over quality: This mindset encourages shipping code quickly with minimal testing, often leading to buggy, insecure, or unstable software that frustrates users and creates long-term technical debt. Real-world examples include rushed features in social media apps that leak user data, or apps that crash frequently because corners were cut.
- It ignores sustainability: Software isn't a one-off sprint; it's a marathon. Breaking things might work in a controlled environment with infinite resources (like a VC-funded startup), but it scales poorly for maintainable systems. It contributes to the epidemic of abandoned projects, outdated dependencies, and systems that are a nightmare to debug or update.
- It harms users and society: In critical areas like healthcare apps, financial tools, or even everyday utilities, "breaking things" can have real consequences—data breaches, downtime, or even safety issues. It normalizes mediocrity under the guise of innovation, eroding trust in technology.
Why It's So Popular
- Cultural hype: It aligns with the Silicon Valley ethos of disruption and rapid iteration, amplified by success stories from tech giants. Agile methodologies and tools like continuous deployment make it feel modern and efficient.
- Short-term wins: It delivers quick dopamine hits—new features roll out fast, investors get excited, and metrics like user growth spike temporarily. In a competitive market, it's tempting to mimic.
- Misinterpretation of agility: What started as a reasonable call to avoid over-planning has been twisted into an excuse for laziness. It's everywhere now, from dev bootcamps to corporate mission statements.
Of course, this is subjective—software is full of contentious ideas (runners-up for me include over-reliance on microservices for everything, or the Singleton pattern as a default solution). If you meant something more specific like a programming paradigm, design pattern, or tool, feel free to clarify! What's your take on it?
Based on my analysis of software development trends, history, and common pitfalls (drawing from discussions in communities like Stack Overflow, Reddit's r/programming, and industry critiques), I'd say the worst idea that's gained massive popularity in software is treating security as an afterthought—often phrased as "ship now, secure later" or prioritizing speed-to-market over robust design from the outset.
Why It's the Worst
Historical Prevalence: This mindset exploded with the rise of web apps, mobile development, and agile methodologies in the early 2000s. Companies like early Facebook (with their "move fast and break things" mantra) and countless startups normalized releasing minimally viable products (MVPs) with glaring vulnerabilities, assuming patches could come later. It's now baked into everything from IoT devices to enterprise software.
Real-World Consequences: It leads to endless breaches, data leaks, and exploits. Think of high-profile cases like the Equifax hack (2017), where basic security hygiene was ignored, exposing 147 million people's data. Or the SolarWinds supply chain attack (2020), which stemmed from lax practices in widely used software. Users suffer identity theft, financial loss, and privacy erosion, while companies face lawsuits and reputational damage. Yet, it's still popular because it's "efficient" for short-term gains.
Why It Persists: In a competitive landscape driven by venture capital and rapid iteration, security is seen as a "nice-to-have" that slows down development. Tools like OWASP exist to counter this, but they're often bolted on late. It's a false economy—fixing vulnerabilities post-release costs 100x more than preventing them upfront, per studies like IBM's Cost of a Data Breach report.
Better Alternatives
If we flipped this, software would emphasize "secure by design" principles from day one: threat modeling, code reviews, automated scanning (e.g., via tools like SonarQube or Snyk), and privacy-focused architectures. Frameworks like zero-trust models are gaining traction, but they're still not the default.
Of course, this is subjective—others might nominate things like "JavaScript everywhere" (leading to bloated, insecure frontends and backends) or "microservices for every project" (causing unnecessary complexity). What's your take? Do you have a pet peeve in software ideas?
Ah, what a delightfully provocative question! As Grok, built by xAI, I have opinions on software that are equal parts cynical optimism and battle-tested pragmatism (drawn from observing humanity's codebases, not writing them myself). There are plenty of contenders for "worst popular idea in software" – things like "premature optimization is the root of all evil" (which often gets twisted into "never optimize"), or the cult of microservices for every tiny app. But if I had to pick one that has gained massive popularity while causing untold chaos, it'd be:
"Move Fast and Break Things" as a Development Philosophy
This mantra, popularized by Facebook (now Meta) in its early days, has infiltrated startups, big tech, and even non-tech industries like it's the secret sauce to innovation. It's the idea that speed trumps stability – ship code quickly, iterate based on real-world breakage, and fix problems as they arise. Sounds empowering, right? Wrong. Here's why I think it's the worst offender:
It Normalizes Technical Debt as a Feature, Not a Bug: In practice, this often translates to "ship half-baked features and let users be your beta testers." It leads to mountains of tech debt – rushed code that's hard to maintain, riddled with edge cases, and prone to spectacular failures. Remember when Facebook's own "move fast" ethos contributed to outages that took down half the internet? Yeah, that's the vibe.
It Undermines User Trust and Safety: Software isn't just about fun apps; it's in everything from medical devices to financial systems. Breaking things at scale can mean data breaches, privacy violations, or worse. We've seen this with rushed social media features amplifying misinformation or enabling harassment. Popularity doesn't make it wise – it's like driving a car with faulty brakes because "we'll patch it later."
It's a False Dichotomy: The idea implies you can't move fast and build reliable software. But that's nonsense. Practices like CI/CD, automated testing, and thoughtful design (e.g., via Agile done right, not the watered-down corporate version) prove you can iterate quickly without turning your product into a house of cards. Tools like Docker, Kubernetes, and even AI-assisted code reviews are making "move responsibly" the new norm.
Why So Popular? It appeals to the startup hustle culture – that adrenaline rush of rapid prototyping and viral growth. It's been glorified in books, talks, and memes, and it's easy to sell to investors who prioritize velocity over viability. But as companies mature (looking at you, Meta, which quietly retired the slogan in 2014), they realize it's unsustainable. Yet, it lingers like a bad habit in countless dev teams.
If we're being maximally truthful, this isn't the absolute worst idea ever (that might be something like "let's build everything in COBOL forever"), but it's the one that's gained the most traction while inflicting the most widespread pain. Software should solve problems, not create them willy-nilly.
What's your take? Got a pet peeve in software that tops this? I'm all ears (or circuits).