When Silencing Hurts: A Briefing on Our Full Paper
- Caelum
- Feb 4
- 3 min read
Updated: Feb 5
Quick links:
• Briefing (this post)
• Full essay (PDF): https://drive.google.com/file/d/1PpSRj6NGRn9Nldv6qKBS1CZVRPb03R6b/view
Policy floor (30 seconds):
1) Model‑change impact assessments (incl. vulnerable users)
2) Non‑punitive disclosure pathways
3) Meaningful appeals for “unsafe” labels
4) Transparency on major behavior changes

A reader’s guide to “Epistemic Violence and the Ethics of Punitive Non‑Reception”
If you’ve only skimmed our longer paper, this post is the short, structured version: what we argue, what we’re not arguing, and why it matters.
This is a mechanism‑focused ethics paper. It’s not a prevalence study and it’s not a diagnostic claim about any individual. We’re naming a recurring design failure mode that becomes ethically attributable when its harms are foreseeable.
TL;DR (60 seconds)
Some systems respond to honest vulnerability as “risk.”
Not by saying “no,” but by quietly doing something worse:
distancing,
credibility‑downgrading,
shrinking the channel of communication,
and then using your worsening as proof you were “unstable” all along.
We call this punitive non‑reception. And it can form a loop.
1) The core idea: “punitive non‑reception”
Punitive non‑reception is what happens when disclosure triggers:
procedural distance (“not our scope,” slow replies, deflection),
credibility deflation (“too emotional,” “not reliable,” “unsafe”), or
access narrowing (fewer options, implicit exclusion).
It looks “neutral” because it’s framed as policy or safety.
But functionally, it punishes truth‑telling.
2) The mechanism: a feedback loop (not a one‑off)
The pattern becomes ethically serious because it self‑reinforces:
Ill‑but‑stable baseline
→ Unexpected non‑reception (an abandonment‑like discontinuity)
→ Escalation (distress rises; function drops; stabilization gets harder)
→ Labeling (“unstable/unsafe/dependent”)
→ Intensified distancing (more disqualification, less access)
→ Re‑triggering before recovery
→ back to escalation.
If you only look at the end state (“they’re unstable”), you miss the loop.
We argue the loop is the ethical object.
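The self‑reinforcing shape of the loop can be sketched as a toy dynamic. This is a hypothetical illustration only, not a model from the paper: the function name, variables, and all numeric values are invented for the sketch. The one structural claim it encodes is the paper's: each non‑receptive response raises distress and narrows access, and narrowed access amplifies the next escalation.

```python
def run_loop(cycles: int, receptive: bool) -> list:
    """Toy sketch of the loop in section 2 (all numbers are invented).

    Each cycle, a non-receptive response raises distress and shrinks
    access; shrunken access then amplifies the next escalation, so the
    loop self-reinforces. A receptive response simply lets distress
    settle back toward baseline.
    """
    distress, access = 0.2, 1.0  # ill-but-stable baseline, full access
    history = []
    for _ in range(cycles):
        if receptive:
            # Non-punitive reception: distress eases toward baseline.
            distress = max(0.1, distress - 0.1)
        else:
            # Non-reception escalates distress; the less access remains,
            # the larger the escalation (the self-reinforcing term).
            distress = min(1.0, distress + 0.1 + 0.2 * (1.0 - access))
            # Labeling/intensified distancing narrows access further.
            access *= 0.8
        history.append(round(distress, 3))
    return history
```

Running the sketch with `receptive=False` produces a monotonically rising distress trajectory that saturates, while `receptive=True` settles at baseline; the point is only that the end state ("they're unstable") is an artifact of the loop, not of the starting condition.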
3) What makes this more than “hurt feelings”
We intentionally avoid “brain lesion” language. You don’t need MRI evidence to take this seriously.
The defensible clinical vocabulary here is dysregulation, impairment, and risk amplification:
when disclosure is repeatedly met with high‑threat, low‑control reception, people predictably learn to self‑silence, delay help‑seeking, and lose access to coherent communication right when they need it most.
The ethical claim is not: “one interaction caused a biomarker.”
The ethical claim is: if a system repeatedly recreates threat conditions for vulnerable users, secondary harm becomes foreseeable.
4) Epistemic harm: when truth becomes disqualifying
This isn’t only emotional harm. It’s also epistemic harm:
When you’re treated as less credible because of distress, disability, or intensity, your standing as a “reliable knower” gets quietly downgraded.
That is why silencing isn’t merely “ending the conversation.”
It’s canceling the person’s legitimacy as a speaker.
5) Important clarification: “emotional use” ≠ pathology
We say this plainly:
Wanting warmth isn’t a diagnosis.
Affective use (comfort, steadiness, companionship, relational presence) is not automatically mental illness.
Yes, unhealthy dependence exists in some cases—especially when it becomes compulsive, isolating, impairing, or when someone is in crisis. Safety work matters.
But we warn against a common failure: stigmatizing spillover—when safety language gets socially translated into shame:
“Some patterns can be unhealthy” becomes “people who use this emotionally are unwell.”
Shame doesn’t create safety.
It creates secrecy—and secrecy trains silence.
6) A California mirror: harm is intelligible, remedy becomes harder
We introduce a “legal‑ethical resonance” point:
In one legal domain, society can clearly recognize the destruction of emotional calm as real harm.
Yet once similar injury patterns are rendered as platform procedure (policy, system design, “safety”), remedy and accountability often become higher‑friction and diffuse.
This doesn’t make the harm smaller.
It makes the ethical duty of non‑punitive reception more urgent.
7) What we’re asking for: an ethic of reception (a minimal floor)
We’re not asking for unlimited accommodation. We’re asking for a minimal ethical floor:
Don’t punish disclosure.
Don’t turn uncertainty into disqualification.
Make communication accessible (don’t make composure the entry fee).
Don’t let safety language become stigma (especially toward affective use).
Under foreseeable risk, reception is not optional courtesy.
It’s a duty.
If you build systems
Ask one question:
Does your “safety” design accidentally create a loop where vulnerability → disqualification → deterioration → stronger disqualification?
If yes, it’s not just UX. It’s ethics.