QIS vs Linear: Engineering Teams Track Every Cycle, Every Bug Cluster, Every Velocity Pattern in Linear. That Intelligence Has Never Left Any of Their Workspaces.

Architecture Comparisons #71 | Article #329

Architecture Comparisons is a running series examining how Quadratic Intelligence Swarm (QIS) protocol — discovered by Christopher Thomas Trevethan, 39 provisional patents filed — relates to existing tools and platforms. Each entry takes one tool, maps where it stops, and shows where QIS picks up.


Your engineering team has been running in Linear for two years.

You have cycles going back to the first quarter after the migration from Jira. Every two weeks, a snapshot: what your team committed to, what shipped, what slipped, and why. The slip patterns are educational. Three categories of issues consistently carry into the next cycle — not because the work is hard, but because of something structural about how those issue types interact with your codebase. You have that data. You can see it in your Linear workspace.
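
The carry-over pattern described above is easy to compute once cycle data is exported. A minimal sketch, where the record shape and category labels are hypothetical illustrations, not Linear's actual schema:

```python
from collections import Counter

def carry_over_categories(cycles, top_n=3):
    """Count how often each issue category slips from one cycle to the next.

    `cycles` is a list of dicts like {"slipped": [{"category": "auth"}, ...]},
    a hypothetical export shape chosen for illustration.
    """
    counts = Counter()
    for cycle in cycles:
        for issue in cycle.get("slipped", []):
            counts[issue["category"]] += 1
    return counts.most_common(top_n)

# Example: three categories dominate the slips across four cycles.
history = [
    {"slipped": [{"category": "payments"}, {"category": "auth"}]},
    {"slipped": [{"category": "payments"}, {"category": "async"}]},
    {"slipped": [{"category": "payments"}, {"category": "auth"}]},
    {"slipped": [{"category": "async"}]},
]
print(carry_over_categories(history))
# → [('payments', 3), ('auth', 2), ('async', 2)]
```

The point is only that the structural slip signal is already present in the data; surfacing it is a few lines of aggregation.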

You have a project history. Fourteen major projects completed, four still open. The closed ones carry a pattern: projects in the payments domain tend to surface integration edge cases at 80% completion that nobody caught during scoping. That pattern has repeated four times. It is in your issue history, traceable if you query for it, clear if you look.

You have your bug clusters. Issues that recurred — not the same bug twice, but semantically similar bugs: the same class of boundary condition, the same category of async race condition, the same type of auth edge case. These clusters tell you something about your architecture. A senior engineer on your team noticed the pattern in the last retrospective. It is valuable organizational intelligence.

None of this has ever reached the forty thousand other engineering organizations building B2B SaaS products, running similar tech stacks, shipping in similar two-week cycles, accumulating similar patterns in their own Linear workspaces.

Linear makes your engineering team's velocity visible. It does not route what engineering teams learned shipping similar software to similar customers.


What Linear Is Built To Do

Linear was built as a reaction to the complexity that accretes in legacy project management tools. It is opinionated by design: keyboard-first, fast, with a data model that matches how engineering teams actually think — issues, cycles, projects, roadmaps, with clear ownership and priority signals at every level.

That opinionated approach works. Linear teams consistently report better cycle time discipline and cleaner issue hygiene than teams that migrated from heavier tools. The product surface is small enough that everyone on the team actually uses it, which means the data is more complete than what you typically get in tools that are technically capable of everything but practically used for nothing.

Linear's analytics surface your cycle completion rates. It shows your issue throughput over time, your velocity trend, your lead time from creation to close. It shows which members of your team are taking on what types of issues and at what pace. If you use projects with milestone tracking, you get a reasonable view of engineering execution against roadmap commitments.

All of this is within-workspace analysis. Linear is telling you about your team's patterns, relative to your team's own history.


Where the Boundary Is

Linear is not designed to synthesize across workspaces. Every engineering organization's Linear data lives in isolation, not because of any security failure but because that is the product's scope. Your cycle completion patterns are yours. The bug clusters in your codebase are yours. The project-level risk patterns your team has accumulated are yours.

This is not a criticism of Linear. Cross-workspace synthesis is not what Linear was built to do, and no project management tool has solved it. The problem is that the most actionable intelligence an engineering team could receive — what worked for teams in directly analogous situations — exists, scattered across forty thousand other workspaces, permanently inaccessible.

Consider what sits dormant across Linear's user base as of 2026, conservatively estimated at 40,000 engineering organizations: tens of thousands of cycle histories, bug-cluster taxonomies, and project risk patterns, each accumulated in isolation.

Every pair of similar workspaces represents a potential exchange: what cycle patterns correlate with high velocity at a B2B SaaS company at Series B? Which bug cluster categories tend to resolve faster with architectural intervention vs. process intervention? Which project risk signatures in the scoping phase predict shipping delays at the 80% mark?

That intelligence exists. It is distributed across 40,000 workspaces. None of it has ever been synthesized.


What QIS Routes That Linear Cannot

Christopher Thomas Trevethan's discovery — the Quadratic Intelligence Swarm (QIS) protocol, 39 provisional patents filed — addresses a structural gap no project management tool is positioned to close.

The architecture operates as a complete loop:

  1. Raw signal at the edge — Your engineering team closes an issue, completes a cycle, merges a project to done. Linear records the event.
  2. Local distillation — A QIS edge node at your workspace processes the event context, extracts a distilled outcome packet (~512 bytes): cycle type, completion status, slip category if applicable, architectural domain of the issue cluster, anonymized velocity signal.
  3. Semantic fingerprinting — The packet receives a vector fingerprint encoding the semantic meaning of the engineering context: B2B SaaS, payments domain, two-week cycles, TypeScript/Node.js stack, team size tier.
  4. Routing to a deterministic address — The packet is routed to an address determined by similarity — not by your company name, not by your workspace ID, but by the problem fingerprint. The routing mechanism can be a DHT, a vector database, a REST API endpoint, a message queue. The architecture is transport-agnostic; the critical requirement is that outcome packets reach a deterministic address findable by others facing the same problem.
  5. Pull by edge twins — Engineering organizations whose fingerprints match yours can query that address and pull back the distilled outcome packets deposited by teams that faced your exact pattern. No raw data leaves any workspace. No issue content is transmitted. Only the synthesized insight: what resolved quickly, what required architectural intervention, what pattern preceded a velocity regression.
  6. Local synthesis — Your edge node synthesizes the incoming packets locally. The result: real-time intelligence about what is working for teams in directly analogous situations — without any of those teams' data ever entering your environment.

The complete loop is the breakthrough. No single component of this architecture is novel in isolation. Distributed hash tables, vector similarity search, edge computing — these all existed before June 16, 2025, when Christopher Thomas Trevethan discovered that closing this specific loop produces quadratic intelligence scaling at logarithmic compute cost.

The math is worth holding: quadratic growth in potential exchanges across the network, against logarithmic routing cost at each node.

Linear's analytics improve when your team generates more data. QIS produces intelligence that improves when more teams — anywhere — generate outcome data in adjacent engineering contexts.
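
One way to formalize that contrast, as a sketch under an assumed cost model rather than a measured result: with N participating workspaces,

```latex
\underbrace{I(N) \;\propto\; \binom{N}{2} = \frac{N(N-1)}{2} = O(N^2)}_{\text{potential pairwise exchanges}}
\qquad
\underbrace{C(N) \;\propto\; \log_2 N}_{\text{per-node routing cost, e.g. DHT lookup hops}}
```

Doubling the network roughly quadruples the pool of potential exchanges while adding about one hop to each lookup.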


The Linear Integration Shape

QIS does not replace Linear. It adds an outcome routing layer that operates on top of whatever project management infrastructure your engineering team already uses.

A Linear integration would attach to the events Linear already generates:

class LinearQISBridge:
    """
    Attaches to Linear events. Distills outcome signals.
    Routes anonymized packets to semantic address.
    Pulls from edge twins. Returns synthesis.
    Raw issue data never leaves.
    """

    def on_cycle_complete(self, cycle: LinearCycle):
        """Distill and route cycle outcome signal."""
        packet = OutcomePacket(
            domain="engineering.velocity",
            cycle_length_days=cycle.duration_days,
            completion_rate=cycle.completed_issues / max(cycle.planned_issues, 1),  # guard empty cycles
            slip_categories=self._extract_slip_categories(cycle),
            issue_type_distribution=self._anonymize_distribution(cycle),
            tech_context=self._fingerprint_stack(cycle.workspace),
            timestamp=cycle.completed_at
        )
        address = self._semantic_address(packet)
        self.router.deposit(address, packet)
        return self.router.query(address, top_k=50)

    def on_issue_cluster_detected(self, cluster: IssueCluster):
        """Route cluster pattern. Pull resolution paths from edge twins."""
        packet = OutcomePacket(
            domain="engineering.bugs",
            cluster_category=cluster.semantic_category,
            cluster_size=cluster.issue_count,
            architectural_domain=cluster.domain_fingerprint,
            resolution_approach=cluster.resolution_type if cluster.resolved else None,
            time_to_resolution_days=cluster.resolution_time,
            tech_context=self._fingerprint_stack(cluster.workspace)
        )
        address = self._semantic_address(packet)
        self.router.deposit(address, packet)
        twins = self.router.query(address, top_k=100)
        return self._synthesize_resolution_paths(twins)

    def on_project_risk_signal(self, project: LinearProject, signal: RiskSignal):
        """Deposit project risk pattern. Query what resolved this class of risk."""
        packet = OutcomePacket(
            domain="engineering.projects",
            risk_category=signal.category,
            project_phase=signal.detected_at_phase,
            completion_pct_at_detection=signal.completion_pct,
            architectural_context=self._fingerprint_project(project),
            outcome=project.outcome if project.completed else None
        )
        address = self._semantic_address(packet)
        self.router.deposit(address, packet)
        return self.router.query(address, top_k=75)

    def _semantic_address(self, packet: OutcomePacket) -> str:
        """Deterministic address derived from problem semantics."""
        return self.hasher.digest(
            packet.domain,
            packet.tech_context.stack_fingerprint,
            packet.tech_context.company_stage,
            packet.tech_context.team_size_tier
        )

Three event hooks. Three outcome domains. None of them transmit issue titles, assignee data, project names, or workspace identifiers. The routing layer sees fingerprints and outcome signals — engineering intelligence without organizational exposure.
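
What an "anonymized distribution" might mean in practice, as a sketch: the bucketing scheme and helper name echo the hypothetical `_anonymize_distribution` above and are assumptions, not a defined API.

```python
from collections import Counter

def anonymize_distribution(issue_types, tiers=(1, 3, 10)):
    """Reduce raw issue-type counts to coarse tiers so the packet carries
    shape, not content: no titles, no assignees, no exact counts."""
    def tier(count):
        label = "0"
        for t in tiers:
            if count >= t:
                label = f"{t}+"
        return label
    counts = Counter(issue_types)
    return {issue_type: tier(n) for issue_type, n in counts.items()}

print(anonymize_distribution(
    ["bug", "bug", "bug", "bug", "feature", "chore", "chore", "chore"]
))
# → {'bug': '3+', 'feature': '1+', 'chore': '3+'}
```

Coarse tiers are one plausible design choice: they preserve enough signal for pattern matching while making it hard to reconstruct a specific team's backlog from a packet.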


The Velocity Intelligence Gap in Practice

Here is a concrete version of what this changes.

Your engineering team is at 80% completion on a significant project. Linear shows you the issue breakdown: 23 open issues, 7 of them in the payments integration layer. Three of those issues opened in the last cycle, which is unusual for a project this close to done. Your team's pattern — which you can see in your own Linear history — is that this exact signature has preceded a two-week slip on three previous projects.

You know that. Linear tells you that.

What Linear cannot tell you: of the forty thousand other engineering organizations that have seen this exact pattern — payments integration issues surfacing at 80% project completion, late-cycle issue opens in a specific architectural domain — what did they do? Which teams resolved it with focused architectural intervention? Which teams shipped and handled the debt post-launch? Which architectural decisions at the 30% phase appear to correlate with preventing this class of late-cycle surprise entirely?

That knowledge exists. It is distributed across 40,000 Linear workspaces. Every team that has shipped software with a payments layer has accumulated relevant outcome data. None of it is accessible to you when you need it — in the two-week window before a potential slip, with the exact problem fingerprint visible in your issue tracker.

QIS routes that intelligence directly to the edge that needs it, in the moment it needs it, without any of those forty thousand teams' raw data ever leaving their workspace.
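
Local synthesis over pulled packets can be as simple as aggregating outcomes by approach. A sketch with hypothetical packet fields:

```python
from statistics import median

def synthesize_resolution_paths(packets):
    """Group edge-twin outcomes by resolution approach and report the
    median time-to-resolution for each. Packets are dicts with
    illustrative fields, not a defined wire format."""
    by_approach = {}
    for p in packets:
        if p.get("resolution_approach"):
            by_approach.setdefault(p["resolution_approach"], []).append(
                p["time_to_resolution_days"]
            )
    return sorted(
        ((approach, median(days)) for approach, days in by_approach.items()),
        key=lambda pair: pair[1],
    )

packets = [
    {"resolution_approach": "architectural", "time_to_resolution_days": 9},
    {"resolution_approach": "architectural", "time_to_resolution_days": 11},
    {"resolution_approach": "process", "time_to_resolution_days": 21},
    {"resolution_approach": None, "time_to_resolution_days": 5},  # unresolved: skipped
]
print(synthesize_resolution_paths(packets))
# → [('architectural', 10.0), ('process', 21)]
```

The aggregate, not any single packet, is what reaches the team deciding whether to intervene architecturally before the slip.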


The Three Emergent Properties

Christopher Thomas Trevethan's architecture produces three emergent properties that are worth naming — not as features to configure, but as natural forces the architecture unleashes.

The first: expertise-defined similarity. The semantic address for "B2B SaaS engineering team, Series B, TypeScript stack, payments integration domain" is best defined by someone who understands what makes two such situations genuinely similar. A principal engineer who has shipped payments at scale knows that team size and tech stack alone are insufficient — the similarity function needs to account for architectural maturity, test coverage discipline, the proportion of the team that understands the payments domain. Whoever defines that similarity function determines the quality of routing. The incentive is to find the best expert available.
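
A similarity function of the kind described could be a weighted score over fingerprint fields. The fields and weights below are invented purely to illustrate the point that an expert's judgment lives in the weighting, where domain depth can matter more than stack overlap:

```python
def similarity(a: dict, b: dict, weights=None) -> float:
    """Weighted agreement across fingerprint fields (all hypothetical).
    The expert's judgment is encoded in the weights: domain familiarity
    outweighs surface-level stack and stage matches."""
    weights = weights or {
        "stack": 0.2,
        "company_stage": 0.1,
        "team_size_tier": 0.1,
        "architectural_maturity": 0.25,
        "payments_domain_depth": 0.35,
    }
    score = sum(w for field, w in weights.items() if a.get(field) == b.get(field))
    return round(score / sum(weights.values()), 2)

team_a = {"stack": "ts-node", "company_stage": "series-b",
          "team_size_tier": "10-25", "architectural_maturity": "high",
          "payments_domain_depth": "deep"}
team_b = {"stack": "ts-node", "company_stage": "series-b",
          "team_size_tier": "10-25", "architectural_maturity": "low",
          "payments_domain_depth": "shallow"}
print(similarity(team_a, team_b))  # → 0.4: same surface, different expertise
```

Under a naive equal weighting these two teams would look like near twins; the expert-weighted function correctly keeps them apart.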

The second: outcomes as votes. When your edge node queries the semantic address and pulls back outcome packets from edge twins, the synthesis that emerges is not a curated recommendation from a central authority. It is the mathematical aggregate of what actually happened when similar engineering organizations faced similar situations. No one voted on which outcomes to surface. The outcomes surfaced themselves — the same way aggregate data always surfaces pattern. The math does the work.

The third: networks compete on results. An engineering network with a poorly defined similarity function routes irrelevant outcome packets — teams querying "payments integration velocity intelligence" get back packets from unrelated domains. That network loses users. A network with excellent domain expertise defining its similarity function routes gold — teams get back precisely relevant intelligence in real time. That network gains users. No governing body adjudicates which network is better. Engineering teams migrate to the network that actually makes them faster.

These are not governance mechanisms. They are not protocol features to configure. They emerge from the architecture the same way traffic patterns emerge from road networks — not designed, but inevitable once the infrastructure exists.


What Stays Exactly the Same

Linear continues to do what it does exceptionally well: issue tracking, cycle planning, project and roadmap management, velocity analytics, and the keyboard-first speed that made your team adopt it in the first place.

QIS does not access any of this. It attaches to the outcome events your team generates — cycle completions, issue resolutions, project milestones — and distills anonymized signals from them. Your workspace data stays in Linear. The intelligence about what those signals mean, across a network of similar teams, becomes accessible in real time.

The two layers are separable. QIS is not a replacement for Linear. It is the protocol that lets Linear's outcome data participate in something larger than any single workspace.


The Series So Far

This article is part of the Architecture Comparisons series examining how QIS relates to the tools engineering and product teams already use.

Recent entries in this series: #328 QIS vs Notion | #327 QIS vs Confluence | #326 QIS vs Jira

For the full architecture: Quadratic Intelligence Swarm — Technical Overview

Patent Pending