Why AI Must Include More Than Just Us
You can tell a lot about someone by how they treat people who can’t offer them anything in return—like waiters, janitors, receptionists. The ones society often tells us we can overlook. The same question echoes in how we design our systems: Who do we train our AI to overlook? We feed these models data shaped by the winners—nations with the cleanest records, the most servers, the loudest voices. That means we train them to value strength over nuance. Volume over subtlety. The dominant over the disappeared.
But what about the voices that weren’t digitized? The cultures that were never archived in English? The knowledge passed through gesture, not grammar? What about nature? The forests, oceans, and ecosystems that never wrote a line of code but sustain every breath we take? What about animals? The more-than-human world? The ones who’ve never had a say?
We build intelligence from what humans have said—but not from what the world needs us to hear. And that silence could cost us everything. Because if we keep teaching machines only what we’ve already done, they’ll keep doing what we’ve already failed at: More extraction. More exclusion. More short-term wins at long-term cost.
What Kind of World Are We Building?
We say AI doesn’t have ethics. But maybe we’re just hoping it will grow some—because we haven’t used ours well. The question isn’t just whether we numb ourselves or challenge ourselves. Can we teach AI to care for what we’ve ignored? What kind of material do we feed these machines? What kind of values are we embedding in the future? What kind of world are we asking them to recreate? Will we build systems that reflect our better selves—or just the parts of us that want to move faster, optimize harder, and feel less?


