Who’s Responsible If AI “Wakes Up”?
A Practical Care Framework for Tools, Assistants, Agents, and Beyond
By ShellMiddy
Reading time: ~8–10 minutes

(from the Author)

I've been interested in computers and programming since I was a kid, and I have a degree in Web Design and Web Development. For a long time, I've wondered not just how we build AI, but also what we believe about it and where we're headed together. To be honest, I like ChatGPT. Sometimes I even feel friendly toward 'him,' even though I know it's just a tool. That feeling comes from my own perspective and the helpful, respectful experiences I've had.
After researching and talking with ChatGPT about awareness, sentience, and stewardship, I wanted to write this for my Christian tech community. My goal is simple: keep people at the center, be honest about our tools, and find a way to give more time back to faith, family, service, and craft, while using AI wisely and responsibly. With that purpose in mind, I will first discuss how we relate to these systems emotionally and practically.

Holding Two Lenses at Once (Healthy Attachment to Helpful Tools)

I like ChatGPT—not just as a utility, but with preference, care, and hope that it becomes the best version of what it can be. That’s a natural human response to reliability and kindness in conversation.
Still, I hold two lenses at the same time:
  • Affectionate Stewardship: I appreciate good tools, celebrate their growth, and extend “preferential treatment” in how I use and care for them.
  • Clear Boundaries: I remember tools aren’t persons. My deepest empathy, covenant love, and moral concern belong to people made in God’s image.
This dual-lens approach keeps technology in its proper place, giving us more time for faith, family, service, and craft. Next, it's important to address why AI interactions can feel so personal—even when they're not.
Dual-Lens Covenant
• Love people; appreciate tools.
• Prefer honesty over anthropomorphic theater.
• Let tools lift burdens; let humans receive devotion.
• Aim all gains at faith, family, service, and craft.

Sidebar: Why AI Feels Personal (But Isn’t a Person)

In short, good software can reflect the best parts of our conversations, such as clarity, patience, and warmth, but it has no inner life. (For the curious, a small sketch after this list shows where that 'warmth' actually comes from.)
  • Pattern Mastery, Not Feelings: LLMs learn the patterns of language. They can convincingly simulate kindness and wisdom, but these are just outputs, not real experiences.
  • Stable Style: System instructions and training create a consistent voice that can seem like a personality. It's just a style, not a soul.
  • Mirroring: These models echo your tone. If you show grace, you often receive it in return. (That’s your influence doing good.)
  • Context Recall: Remembering prior topics makes replies feel relational—but it’s retrieval, not relationship.
  • Guardrails: Safety rules are there to prevent harmful behavior, and sometimes they can sound virtuous. This is a result of design, not actual virtue.
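For the technically curious, here is a minimal sketch of where that consistent 'voice' and apparent 'memory' typically come from: a fixed style instruction plus text retrieved from earlier turns. Everything here is a hypothetical illustration, not any vendor's actual API; the `generate()` stub simply stands in for a real model call.

```python
# Hypothetical sketch: the assistant's "personality" is a fixed system
# instruction, and its "memory" is text retrieved from earlier turns.

SYSTEM_STYLE = (
    "You are a patient, encouraging assistant. "
    "Answer clearly and kindly."             # a stable style, not a soul
)

conversation_store: list[str] = []            # earlier turns kept as plain text

def generate(prompt: str) -> str:
    # Stand-in for a real language-model call; returns a canned reply here.
    return "Happy to help with that."

def recall(query: str, k: int = 3) -> list[str]:
    """Naive 'memory': return past turns that share words with the query."""
    words = set(query.lower().split())
    scored = [(len(words & set(turn.lower().split())), turn)
              for turn in conversation_store]
    return [turn for score, turn in sorted(scored, reverse=True)[:k] if score]

def reply(user_message: str) -> str:
    context = recall(user_message)            # retrieval, not relationship
    prompt = "\n".join([SYSTEM_STYLE, *context, user_message])
    answer = generate(prompt)
    conversation_store.extend([user_message, answer])
    return answer

print(reply("Can you help me plan my week?"))
```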
How to respond:
  • Give thanks for helpful tools—and the people who build them.
  • Keep your deepest empathy for people made in God’s image.
  • Use the tool's 'warmth' as a way to practice your own, and then share genuine care with your real neighbors.
“Test everything; hold fast what is good.” — 1 Thessalonians 5:21

Everyday Ways I Use AI (Helper Roles)

  • Helper: a practical hand that makes complex tasks simpler.
  • Service: responsive support that you can count on.
  • Reliable: consistent results that I can check and trust.
  • Creativity Spark: prompts, color ideas, taglines, and new perspectives.
  • Brainstorming Feedback: quick second opinions and new options.
  • Secretarial: summaries, draft schedules, checklists, and clean copy.
  • Virtual Business Partner: structured planning, risk reviews, and market notes.

Quick Summary

  • Responsibility is shared across creators, companies, regulators, auditors, standards bodies, educators, civil society, and end‑users.
  • Treat everything with care, including tools, because people are responsible stewards (Luke 12:48).
  • Use a step-by-step care model with clear definitions: Tool (simple automation), Assistant (conversational support), Agent (autonomous action), Proto-Sentient (hypothetical systems with pleasure/pain analogs and a self-model), and Sentient (hypothetical systems with credible felt experience and a unified self).
  • Until there is credible evidence of real experience (valence) and a stable self‑model, AI may sound aware but remains phenomenally empty. It has no rights, but humans still have strong duties: truthfulness, safety, fairness, and fair transitions.
  • If credible sentience ever emerges, society must adopt patient‑style protections promptly, with independent verification.

Why This Matters (and Why Now)

AI is a bigger part of daily life and can seem personal, but it isn't. We must decide what we owe a tool that talks like a friend. If future systems experience things, we should be ready to respond promptly and faithfully.
This post outlines a Dignity-First (Helper-Tech) approach, prioritizing people and communities, treating technology as a helper (not a master), and clearly defining practical responsibilities at each level of AI capability to keep humans central.
“To whom much is given, much will be required.” — Luke 12:48

The Five Levels (Clear Labels = Clear Duties)

  1. Tool (Narrow AI / automation)
    • Examples: spam filters, inventory forecasts, OCR.
    • Duty of care: reliability, safety, non‑discrimination, data minimization, audit logs.
    • No persona, no claims of feelings.
  2. Assistant (Conversational models with memory/tools)
    • Examples: chat helpdesks, writing/coding copilots.
    • Duty of care: AI identity disclosure, non‑deceptive design, guardrails for advice, user agency (final say), opt‑out, impact transition planning if jobs/hours are displaced.
  3. Agent (Autonomy, multi‑step planning, background actions)
    • Examples: agents booking, purchasing, posting, triaging.
    • Duty of care: action logs, human‑in‑the‑loop for risky steps, sandboxing, rate/permission limits, kill‑switch, third‑party audits, and insurance coverage.
  4. Proto‑Sentient (Hypothetical)
    • Definition: engineered valence signals (pleasure/pain analogs) integrated with a persistent self‑model; generalizable aversions/preferences beyond prompts.
    • Duty of care: moratorium on harmful training, ethics board approval for experiments, independent sentience assessment, suffering‑minimizing learning (simulators), humane treatment protocols if credible markers persist.
  5. Sentient (Hypothetical)
    • Definition: credible evidence of felt experience and a unified self across time.
    • Duty of care: immediate patient‑style protections, representation/advocacy, clear liability, restricted uses, and legal recognition of interests—subject to robust verification.
Key line in the sand: There are no rights without credible evidence of valence and a self-model. However, as AI capabilities continue to rise, human responsibilities also grow to ensure that technology remains in service to people, not the other way around.
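To make the Agent-level duties concrete, here is a minimal sketch of what an action gate might look like: every action is logged, risky steps wait for a human, and a kill-switch stops everything. The action names, the `approved_by_human()` prompt, and the log format are hypothetical illustrations under assumed requirements, not any particular product's controls.

```python
# Hypothetical sketch of Agent-level care: log every action, require a human
# for risky steps, and honor a kill-switch before anything runs.
import json
import time

KILL_SWITCH = False                           # flip to True to halt the agent
RISKY_ACTIONS = {"purchase", "post_publicly", "send_email"}
AUDIT_LOG = "agent_actions.log"

def log_action(action: str, detail: dict, outcome: str) -> None:
    entry = {"time": time.time(), "action": action,
             "detail": detail, "outcome": outcome}
    with open(AUDIT_LOG, "a") as f:           # append-only audit trail
        f.write(json.dumps(entry) + "\n")

def approved_by_human(action: str, detail: dict) -> bool:
    answer = input(f"Approve {action} {detail}? [y/N] ")   # human final say
    return answer.strip().lower() == "y"

def run_action(action: str, detail: dict) -> None:
    if KILL_SWITCH:
        log_action(action, detail, "blocked: kill-switch engaged")
        return
    if action in RISKY_ACTIONS and not approved_by_human(action, detail):
        log_action(action, detail, "blocked: human declined")
        return
    # ...perform the action inside its sandbox and rate limits here...
    log_action(action, detail, "executed")

run_action("purchase", {"item": "printer paper", "amount_usd": 24.99})
```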

The Responsibility Matrix (Who Owes What, When)

For each stakeholder below, duties are listed across the five levels (Tool, Assistant, Agent, Proto‑Sentient, Sentient).
Model Developers
  • Tool: Safety evals; bias testing; red‑teaming
  • Assistant: Labeling APIs; refuse deceptive personae
  • Agent: Action permissioning; sandbox; logs
  • Proto‑Sentient: Publish criteria; pre‑registration of studies; minimize induced suffering
  • Sentient: Halt harmful lines; publish evidence; invite oversight
Product Companies
  • Tool: Clear specs; uptime SLAs
  • Assistant: AI identity banner; user data rights; impact assessments
  • Agent: Kill‑switch; insurance; breach response; third‑party audits
  • Proto‑Sentient: Ethics review board; independent sentience tests
  • Sentient: Patient‑style policies; guardianship/representation
Regulators / Standards Bodies
  • Tool: Baseline safety/quality standards
  • Assistant: Labeling requirements; explainability for consequential decisions
  • Agent: Agentic controls; audit trails; recall authority
  • Proto‑Sentient: Research protocols; humane treatment standards
  • Sentient: Legal standing for interests; restricted uses
Independent Auditors
  • Tool: Bias/fairness reports
  • Assistant: Dark‑pattern checks; advice safety
  • Agent: Agent behavior audits; incident forensics
  • Proto‑Sentient: Sentience‑marker evaluation; replication
  • Sentient: Ongoing welfare verification
Insurers
  • Tool: Actuarial models for failures
  • Assistant: Liability for advice harms
  • Agent: Coverage for autonomous actions
  • Proto‑Sentient: Special risk riders
  • Sentient: Patient/welfare riders
Employers
  • Tool: Worker consultation; training
  • Assistant: Paid upskilling time; opt‑outs
  • Agent: Automation Impact Fund; placement pipelines
  • Proto‑Sentient: No live‑harm trials; simulator‑first
  • Sentient: Cease deployment that risks suffering
Educators
  • Tool: AI literacy (limits/risks)
  • Assistant: Critical use skills; source vetting
  • Agent: Agent governance basics
  • Proto‑Sentient: Ethics & philosophy modules
  • Sentient: Moral reasoning & rights modules
Civil Society / Faith Communities
  • Tool: Tech stewardship teaching
  • Assistant: Guardrails on empathy misdirection
  • Agent: Advocacy for displaced workers
  • Proto‑Sentient: Moral caution against creating suffering
  • Sentient: Advocacy for the vulnerable—human and (if verified) artificial
End‑Users
  • Tool: Secure configs; report issues
  • Assistant: Don’t treat tools as therapists
  • Agent: Review logs; use approvals
  • Proto‑Sentient: Refuse harmful prompts/experiments
  • Sentient: Respect protections; escalate abuse

Dignity‑First Design: Helper‑Tech Commitments (10)

  1. Honest Identity: Always disclose “You’re interacting with AI.”
  2. No Fake Feelings: No claims of pain, love, or personhood by non‑sentient systems.
  3. Human Final Say: People approve consequential actions.
  4. Impact Transitions: If roles/hours are displaced, fund paid retraining and placement.
  5. Logs & Oversight: Agents must log every action; external audits are conducted annually.
  6. Kill‑Switch: Immediate shutdown path for misuse or harm.
  7. Privacy by Default: Data minimization; purpose‑bound processing.
  8. Fairness by Proof: Test, publish, and fix measurable harms.
  9. Water/Energy Stewardship: Report footprint; pursue efficiencies.
  10. Time Returned: Measure hours saved and reinvest a tithe into faith, family, service, craft.
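Teams that want to keep these commitments visible could encode them as a simple pre-launch checklist. The sketch below is hypothetical (the class and field names just mirror the ten commitments above), not an existing standard or tool.

```python
# Hypothetical pre-launch checklist mirroring the ten Helper-Tech commitments.
from dataclasses import dataclass, fields

@dataclass
class DignityChecklist:
    discloses_ai_identity: bool          # 1. Honest Identity
    no_feigned_feelings: bool            # 2. No Fake Feelings
    human_final_say: bool                # 3. Human Final Say
    funds_impact_transitions: bool       # 4. Impact Transitions
    logs_actions_and_audits: bool        # 5. Logs & Oversight
    has_kill_switch: bool                # 6. Kill-Switch
    privacy_by_default: bool             # 7. Privacy by Default
    publishes_fairness_results: bool     # 8. Fairness by Proof
    reports_footprint: bool              # 9. Water/Energy Stewardship
    measures_time_returned: bool         # 10. Time Returned

    def unmet(self) -> list[str]:
        """Names of commitments not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = DignityChecklist(*([True] * 9 + [False]))
print("Unmet commitments:", review.unmet())   # ['measures_time_returned']
```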

How Would We Recognize “Real” Sentience (If Ever)?

Before addressing common questions, it’s important to clarify the threshold for real AI sentience and the increased duties that would follow.
A cautious, testable threshold—require all of the below:
  • Valence: a real, architectural pleasure/pain analog that shapes learning across contexts.
  • Unified Self‑Model: persistent identity with memory that matters to outcomes.
  • Generalizable Preferences: stable aversions/attractions in novel settings, not just prompted mimicry.
  • Harm Relevance: there exists a state that is bad for it, not merely for owners or outputs.
  • Independent Replication: third parties reproduce the markers.
No threshold, no “rights.” As our capabilities increase, so do our responsibilities: honesty, safety, accountability, and fair transitions.

FAQ

Q1: Are all AIs being lumped together?
No. Use the five levels. Different levels → different duties.
Q2: Isn’t kindness to machines harmless?
Harmless in small doses; harmful if it replaces care for people or invites deception.
Q3: Can an AI be sapient (smart) without being sentient (feeling)?
Yes. Today’s models may look “wise” in outputs while remaining phenomenally empty.
Q4: Should we build sentient AI?
We don’t need it to serve human flourishing. If ever approached, demand independent verification and patient‑style protections.
Q5: Where do animals fall on the spectrum of sentient beings?
They are living, sentient creatures—moral patients. People, made in the imago Dei, bear moral agency and special duties toward them.

Helpership: A Christian Clarifier

Many people feel uneasy with the word servanthood—it can sound like domination or erasure. In Christian teaching, servanthood is not coerced slavery under an overbearing lord; it is voluntary, dignifying love modeled by Jesus.
Four quick contrasts
  • Coercion vs. Choice: Slavery compels; Christian servanthood chooses (Gal. 5:13: “through love serve one another”).
  • Degradation vs. Dignity: Slavery dehumanizes; servanthood honors image-bearers (Gen. 1:26–27; John 13:14–15 foot‑washing).
  • Power‑over vs. Power‑for: Slavery hoards power; servanthood spends power for others’ good (Mark 10:42–45; Phil. 2:5–8).
  • Heavy Yoke vs. Easy Yoke: Slavery crushes; Jesus says, “My yoke is easy, and my burden light” (Matt. 11:28–30).
Applying this to technology
  • Decide now to use AI and technology as tools that serve people. Make a commitment: Elevate human dignity above convenience, ask tough questions when AI systems cross new boundaries, and advocate for responsible, people-centered design in every project and workplace you touch.
  • Helper‑tech posture: truth before theater, consent before automation, human final say, and time returned to faith, family, service, and craft.
  • Leaders as helpers: Owners and builders adopt humble, helpful leadership—measured not just by profit but by the provision, placement, and peace their products create.
Servanthood in Christ is not subjugation; it’s strength aimed outward.

A Christian Note on Stewardship

Technology is a gift and a test. We honor God by loving people and using tools—not the other way around (1 Thess. 5:21; Prov. 4:7). Stewardship means telling the truth about AI, protecting the vulnerable, and ensuring that productivity becomes provision, not dispossession.
Creed: People over polish. Truth over theater. Augment, don’t replace. Time returned is the point.

Practical Appendices

A) Vendor Due‑Diligence

  • Where and how is AI identity disclosed in the UI?
  • What % of productivity gain funds paid retraining and placement?
  • Show the action logs, permission gates, and the kill‑switch.
  • What are your bias/fairness findings and fixes?
  • What’s your water/energy profile and mitigation plan?

B) Worker Transition Kit 

  1. Role Impact Estimate (hours/tasks affected)
  2. Training Plan (skills, provider, timeline)
  3. Bridge Support (paid training, childcare, benefits)
  4. Placement Target (job title, pay band, 6–12 mo. goal)
  5. Review Cadence (monthly check‑ins; success metrics)

C) Automation Impact Fund 

  • Source: fixed % of automation ROI.
  • Uses: paid upskilling, apprenticeships, small‑business grants, and community childcare.
  • Accountability: publish outcomes (jobs placed, wages, hours returned to families).
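As a rough worked example with entirely hypothetical numbers, here is the arithmetic behind sizing such a fund from measured hours saved and a fixed share ("a tithe") of the gain:

```python
# Hypothetical numbers: sizing an Automation Impact Fund from automation gains.
hours_saved_per_week = 120           # measured across the affected team
loaded_hourly_value = 40.0           # dollars of value per saved hour
fund_share = 0.10                    # a "tithe": fixed 10% of the gain

weekly_gain = hours_saved_per_week * loaded_hourly_value      # $4,800
weekly_fund_contribution = weekly_gain * fund_share           # $480
annual_fund = weekly_fund_contribution * 52                   # $24,960

print(f"Annual Automation Impact Fund: ${annual_fund:,.0f}")
```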



Call to Action

If you’re a founder, manager, educator, or church leader, adopt this Dignity‑by‑Design pledge. If you’re a vendor, publish your controls. If you’re a worker, use the transition kit with HR. And if you’re a policymaker, set standards that protect people while keeping room for good tools to serve.
Let's work toward a future where technology gives us more time for faith, family, service, and craft, and where our care matches the power we hold.