The path to knowledge: A personal reflection
The question of the origin of our knowledge seems almost anachronistic in our data-driven world. While we are daily confronted with terms like “big data,” “machine learning,” and “algorithmic decision-making,” a fundamental question recedes into the background: How do we actually know what we believe we know?
When I recently read that an AI system achieves better results in diagnosing skin cancer than experienced dermatologists, I immediately wondered: Is the AI’s “knowledge” the same as the doctor’s? The doctor relies on years of experience, theoretical understanding, and clinical intuition. The AI, on the other hand, recognizes patterns in millions of images without ever having touched or spoken to a patient. If both arrive at the same diagnosis, have they come to “know” the same thing? This question is not only philosophically interesting but has concrete consequences: Whom do we trust when diagnoses conflict? Who bears responsibility for wrong decisions?
The epistemological question is more relevant than ever as we increasingly delegate decisions to systems based on entirely different “ways of knowing” than human thinking.
The tension between experience and reason
The conflict between empiricism and rationalism forms the foundation of modern epistemology. The famous empiricist maxim “Nothing is in the intellect that was not first in the senses,” a scholastic formula that John Locke placed at the heart of his philosophy, marks a radical position: All knowledge stems from experience. This conviction has profoundly shaped our scientific self-understanding. The notion that reliable knowledge must be based on observation and experiment has become a cornerstone of our thinking.
The tension between empiricist and rationalist approaches is particularly evident today in artificial intelligence. A data scientist and a philosopher approach the problem of knowledge fundamentally differently. The data scientist often follows an empiricist logic: More data leads to better predictions, without necessarily requiring a deeper understanding of the underlying mechanisms. Google’s research director Peter Norvig put this position succinctly: “We don’t have better algorithms, we just have more data.”
The philosopher, however, would ask about the conceptual foundations: What does it even mean to recognize a “pattern”? What assumptions flow into model creation? How does correlation relate to causality? This rationalist-influenced approach insists that data remains mute without a conceptual framework.
David Hume took the empiricist thought to the extreme. His call to burn any book containing neither mathematical nor empirical content seems shocking to us today. And yet this provocation contains an important thought: How can we distinguish between well-founded knowledge and mere speculation? This question materializes today in debates about evidence-based medicine and politics. When anti-vaxxers question scientific findings — on what epistemological basis do we then decide on the validity of competing knowledge claims?
What fascinates me about Hume’s position is its radicalism. By ruthlessly questioning the foundations of our knowledge, he forces us to critically examine our own beliefs. At the same time, his consistent empiricism leads to a dilemma: How can we justify even basic scientific principles like causality if they cannot be directly observed? The statement “Smoking causes lung cancer” can only be empirically captured as a correlation; causality itself remains a theoretical construction.
This is where Immanuel Kant’s revolutionary synthesis begins. His insight that “thoughts without content are empty, intuitions without concepts are blind” overcomes the one-sided fixation on experience. For Kant, knowledge is always an interplay: The world provides the material of our knowledge, but our intellect gives it form. Space, time, and categories like causality are not properties of things themselves, but structures of our perception and our thinking.
What does this insight mean for our pursuit of knowledge? It leads to a fundamental modesty. We never know the world “in itself,” but only as it appears to us, filtered through the structures of our mind. Kant’s “Copernican revolution” has fundamentally changed our understanding of knowledge. Knowledge is not simply a mirror of the world, but an active construction.
The tension between experience and reason, which Kant tried to overcome, remains effective to this day. It manifests, for example, in current neuroscientific debates: When we observe through fMRI scans how the brain responds to certain stimuli, can we directly deduce from this how consciousness arises? Or do we need conceptual frameworks that go beyond the merely observable? Naturalists among neuroscientists tend toward the first position, while philosophers of mind often defend the second. My own position oscillates between these poles: Without experience, our thinking would remain empty; without conceptual structures, our experience would remain mute.
The linguistic turn: Knowledge as social practice
Kant’s epistemology focused on individual consciousness and its a priori structures. A radical reorientation came with the linguistic turn of the 20th century. Ludwig Wittgenstein’s insight that “the meaning of a word is its use in language” fundamentally changed the focus of epistemology. The starting point is no longer the solitary consciousness confronting the world, but the communal practice of language use.
What particularly appeals to me about Wittgenstein’s position is the way it connects knowledge with lived practice. Language does not function primarily as a representation of the world, but as joint action. When I use the word “game,” it does not refer to a common feature of all games (there is none), but to a family of activities connected by “family resemblances.” Meaning arises not through representation, but through use.
This perspective has profoundly changed scientific practice and theory formation. Consider a concrete case: In particle physics, scientists use terms like “quark,” “spin,” or “charm” to describe subatomic phenomena. These terms do not function as direct representations of observable entities — no physicist has ever “seen” a quark. Rather, they are theoretical tools that make sense within a specific scientific practice. They are embedded in a complex network of experiments, mathematical formalisms, and theoretical models.
Wittgenstein’s perspective helps us understand why scientific terms do not simply “represent the world,” but function within specific practices. A particle physicist, a biologist, and an everyday person live in some ways in “different worlds” — not because they perceive different realities, but because they play different language games, use different conceptual tools.
This has concrete implications for scientific methodology. If meaning lies in use, then scientific concepts cannot be understood in isolation, but only in the context of their application. The concept of “force” in physics has a different meaning than in everyday life — not because it describes a different reality, but because it is used in a different language game. This insight has led philosophers of science to increasingly examine the practice of science — not just its theories and results, but also its experimental methods, institutional structures, and social norms.
If language is not a private representation relationship, but a public practice, then there can be no purely private knowledge. Knowledge is always embedded in social “language games” and “forms of life.” Wittgenstein’s famous argument against a private language makes clear: Even the most intimate experiences like pain become intelligible only through public criteria.
Robert Brandom systematically develops this thought with his “game of giving and asking for reasons.” What excites me about this approach is the way it ties knowledge to responsibility. Having knowledge means being able to give reasons and to face criticism. To say “I know that p” does not mean having an inner certainty, but entering into a public commitment.
In this perspective, knowledge becomes a normative status. It is not a mere state, but a position in a social space of giving and asking for reasons. This view helps me understand why scientific knowledge is not based only on observation, but on critical examination, on peer review, on public justification. The current replication crisis in psychology and other sciences illustrates this aspect: Scientific knowledge is not based on individual studies, but on critical discourse within the scientific community.
The linguistic turn relativizes both naive empiricism and classical idealism. Knowledge is based neither on mere sensory experience nor on pure reason, but on the shared practice of justifying and judging. This insight changes our image of knowledge: It is not a finished product, but an ongoing process; not a private representation, but a public negotiation.
The social dimension: Paradigms and pragmatism
Thomas Kuhn’s “The Structure of Scientific Revolutions” took the social dimension of knowledge even further. His analysis of the history of science shows: Scientific knowledge does not develop continuously through accumulation of facts, but in ruptures — in “paradigm shifts.” A paradigm is more than a theory; it is a comprehensive worldview that determines which questions are relevant, which methods are legitimate, and which answers are acceptable.
What fascinates me about Kuhn’s theory is the realization that scientific revolutions do not simply discover new facts, but change entire ways of perceiving. The transition from the geocentric to the heliocentric worldview or from Newtonian to Einsteinian physics cannot be understood as mere correction. These are fundamental shifts in scientific seeing and thinking. The world itself changes with our paradigms.
We are currently experiencing a possible paradigm shift in artificial intelligence. The classical paradigm of symbolic AI attempted to replicate human thinking through formal rules and symbol manipulation. The new paradigm of machine learning, especially deep neural networks, largely dispenses with explicit rules and instead relies on statistical pattern recognition in large amounts of data. This change cannot simply be described as an “improvement” — it is a fundamental shift in the conception of what “artificial intelligence” even means.
This paradigm shift changes not only the methods but also the criteria of success: While symbolic AI aimed at transparency and traceable chains of inference, machine learning emphasizes prediction accuracy and scalability. What counts as an “explanation” has fundamentally changed. This illustrates Kuhn’s thesis that different paradigms can be “incommensurable”: they speak, in a sense, different languages.
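To make this contrast tangible, here is a minimal sketch in Python. The spam-filter scenario, the word lists, and the threshold are all invented for illustration; the point is only the difference in what counts as “knowledge” in each paradigm.

```python
# Toy contrast between the two AI paradigms (all rules and data invented).

# Symbolic paradigm: knowledge as explicit, inspectable rules.
def symbolic_filter(message: str) -> bool:
    """Every decision can be traced back to a hand-written rule."""
    return any(w in message.lower() for w in ("prize", "winner", "urgent"))

# Learning paradigm: "knowledge" as statistics fitted to labeled examples.
def train_word_weights(examples):
    """For each word, estimate the fraction of training messages that were spam."""
    counts = {}  # word -> (spam_count, total_count)
    for text, is_spam in examples:
        for word in set(text.lower().split()):
            spam, total = counts.get(word, (0, 0))
            counts[word] = (spam + int(is_spam), total + 1)
    return {w: s / t for w, (s, t) in counts.items()}

def learned_filter(message: str, weights: dict, threshold: float = 0.5) -> bool:
    """Classify by averaged word statistics; no single rule explains the verdict."""
    scores = [weights.get(w, 0.5) for w in message.lower().split()]
    return bool(scores) and sum(scores) / len(scores) > threshold
```

The first function can justify every verdict by pointing to a rule; the second can only point to frequencies in its training data. That is the shift in what counts as an “explanation.”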
Current scientific debates reflect this aspect. In physics, a struggle between different paradigms is currently taking place: String theory promises a unified theory of all forces, but is rejected by critics as “untestable.” Here we see what Kuhn emphasized: The evaluation of scientific theories does not occur solely according to empirical criteria, but according to paradigm-internal standards such as elegance, simplicity, and coherence.
This view has radical consequences. If science does not simply represent reality, but is structured by paradigms, then there is no neutral, paradigm-free access to reality. Scientific revolutions cannot be decided by facts alone, because what counts as a fact is itself paradigm-dependent. Kuhn speaks of the “incommensurability” of different paradigms — they cannot be directly compared because they apply different standards.
Richard Rorty has developed this insight into a radical pragmatism. His departure from the “mirror of nature” marks a break with traditional epistemological models. His redefinition of truth as “what our community lets us get away with” initially seems strange. But on closer inspection, it opens an important perspective: Knowledge is always embedded in social practices. Truth is not timeless correspondence, but consensual validation.
This position gains particular urgency in the current “post-truth” debate. The expression “fake news” has become a political battle term used to delegitimize legitimate reporting. At the same time, scientific findings on climate change are presented as “mere opinion.” If Rorty is right and truth is ultimately a social category, how can we then distinguish between justified and unjustified knowledge claims?
Here the ambivalence of pragmatism becomes apparent: On one hand, it undermines absolutist truth claims, but on the other hand, it needs normative standards to avoid slipping into pure relativism. Rorty’s answer lies in democratic practice — in open, pluralistic discourse that integrates different perspectives. But this discourse presupposes certain social and institutional conditions: a functioning public sphere, mutual respect, the willingness to take arguments seriously. Precisely these conditions seem increasingly endangered in times of polarized “echo chambers” and algorithmically amplified filter bubbles.
Rorty’s pragmatism stands in tension with Kuhn’s paradigm theory. While Kuhn still speaks of paradigms that can be better or worse (measured by their ability to solve problems), Rorty goes further: There is no neutral standard beyond our practices. Science is not a privileged representation of reality, but a specific form of dealing with the world, alongside others such as art, politics, or everyday practice.
Consequences for our understanding of knowledge in the digital age
What follows from these different perspectives for our current understanding of knowledge in the digital age? First, a fundamental insight: Knowledge is not a passive representation of a given world, but an active practice. It arises in the interplay of perception and thinking, of individual experience and social exchange, of conceptual structure and sensory fullness. It is neither completely subjective nor absolutely objective, but intersubjectively mediated.
This insight takes on new dimensions in the digital world. The algorithms that filter and structure our information flows embody certain epistemological assumptions and simultaneously create new epistemological problems. Take Facebook as an example: The algorithm that determines which posts appear in our news feed is based on the assumption that “relevant” knowledge is that which reflects our existing interests and beliefs. This leads to epistemic “filter bubbles,” in which we are predominantly exposed to information that confirms our existing beliefs.
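The mechanism can be sketched in a few lines of Python. The interest profiles and the similarity measure below are invented, not Facebook’s actual ranking model, but they show how optimizing for predicted engagement systematically favors the already familiar.

```python
# Minimal sketch of engagement-based feed ranking (invented, not Facebook's
# actual model): posts resembling what the user already engages with rank
# higher, which is exactly the mechanism that produces filter bubbles.

def similarity(interests: dict, topics: dict) -> float:
    """Dot product between a user's interest profile and a post's topics."""
    return sum(interests.get(topic, 0.0) * w for topic, w in topics.items())

def rank_feed(interests: dict, posts: list) -> list:
    """Order posts by predicted engagement, i.e. by similarity to past interests."""
    return sorted(posts, key=lambda p: similarity(interests, p["topics"]), reverse=True)

user = {"politics_left": 0.9, "climate": 0.7}
posts = [
    {"id": 1, "topics": {"politics_left": 1.0}},   # confirms existing views
    {"id": 2, "topics": {"politics_right": 1.0}},  # challenges them
    {"id": 3, "topics": {"climate": 0.8}},
]
print([p["id"] for p in rank_feed(user, posts)])   # -> [1, 3, 2]
```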
Google’s PageRank algorithm — which evaluates the relevance of websites according to the number and quality of their links — reflects a social-pragmatic epistemology: Knowledge is constituted through social networking and recognition. This curiously resembles Rorty’s pragmatic definition of truth as “what our community lets us get away with” — except that the “community” here consists of websites and their connections.
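The core of this idea fits in a short function. The sketch below follows the well-known simplified formulation of PageRank, with the damping factor 0.85 from the original 1998 paper; the link graph itself is invented.

```python
# Simplified PageRank by power iteration: a page's rank is the chance that
# a "random surfer" following links (and occasionally jumping) lands on it.

def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages        # dangling page: spread evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(web))  # "C" scores highest: the best-connected pages link to it
```

The definition is recursive: a page is important if important pages link to it. Authority here is not a property a page has in itself, but a position in a network of mutual recognition, which is precisely the social constitution of relevance described above.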
The algorithmic mediation of knowledge becomes particularly problematic when it reinforces itself. If I search for “vaccinations dangerous,” the search algorithm may present me with anti-vaccination sites. If I click on these, the algorithm “learns” that I am interested in such content and shows me more of it in the future. The result is an “epistemic downward spiral,” in which initial doubts are reinforced through algorithmic feedback.
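A toy simulation makes the spiral visible. All numbers are invented; the only assumption is that each click multiplies the weight of the clicked topic.

```python
import random

# Toy simulation of the self-reinforcing loop: each click raises the
# probability that the same topic is recommended again, so an initial
# slight doubt grows into a near-exclusive recommendation stream.

random.seed(0)
weights = {"anti_vax": 1.0, "mainstream": 1.0}  # no preference at the start

for _ in range(20):
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    if shown == "anti_vax":                     # the user's initial doubt:
        weights[shown] *= 1.5                   # only these posts get clicked

share = weights["anti_vax"] / sum(weights.values())
print(f"share of anti-vax recommendations after 20 rounds: {share:.0%}")
```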
The tension between empiricism and rationalism, which Kant sought to overcome, is placed in a new context by the linguistic turn. Knowledge begins neither with pure sense impressions nor with a priori principles, but with common linguistic practices. It is socially mediated from the beginning. This perspective helps us understand the current epistemic challenges of social media: If knowledge is socially constituted, then the structures of our digital communication are directly epistemologically relevant.
Another example: When scientific findings on climate change appear on Twitter and Facebook alongside conspiracy theories and misinformation, with no clear way to distinguish them, the social dimension of knowledge becomes problematic. In the sense of Wittgenstein and Brandom, one could say: The “game of giving and asking for reasons” is disturbed when actors with radically different standards of justification operate on the same platform.
Kuhn’s and Rorty’s emphasis on the social and historical dimension of knowledge takes this insight further. Knowledge is not timeless, but develops in concrete historical contexts. The current “post-truth” debate can be understood in this light: We are not simply experiencing a decay of truth, but a struggle between different “truth regimes” — different paradigmatic ways of structuring and evaluating knowledge. The polarized political landscape reflects this paradigmatic conflict: Progressive and conservative communities increasingly live in “different worlds” with different epistemic standards.
These perspectives change our relationship to truth. It is neither a simple correspondence with an independent reality nor a mere construction. It arises in the productive tension between the resistance of the world and the active structure of our thinking and speaking. This insight helps us avoid both naive objectivism and radical relativism. The current climate debate illustrates this tension: Climate change is an objective reality that sets limits to our actions — but the evaluation of its implications and the choice of appropriate responses are inevitably shaped by values, interests, and cultural contexts.
This results in a fundamental ethical consequence: Knowledge demands responsibility. If our concepts do not simply represent the world, but help shape it, if our knowledge is not passively received but actively worked out, then we bear responsibility for the way we understand and describe the world. This responsibility seems particularly urgent to me today in view of algorithmic decision systems that make judgments on the basis of large amounts of data, whether in lending, hiring, or judicial decisions.
A concrete example: If an algorithm for assessing creditworthiness is trained on historical data that reflects structural disadvantages of certain groups, it will reproduce these disadvantages, even if it is conceived as “colorblind” and receives no direct information about skin color or gender. The supposedly neutral, data-based decision-making proves, on closer inspection, to be normatively charged. Here Kant’s insight proves its continuing relevance: Data without a conceptual framework is blind, and this framework is never neutral.
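A deliberately crude sketch, with invented data and an invented proxy variable (the postal code), shows how this works even though the protected attribute never appears in the model’s input.

```python
# Invented illustration of proxy bias: the model never sees the protected
# attribute, yet the postal code correlates with it in the historical data,
# so the learned rule reproduces the old disadvantage.

historical = [
    # (postal_code, loan_repaid); district "A" was structurally
    # disadvantaged, so its historical repayment record is worse.
    ("A", False), ("A", False), ("A", True),
    ("B", True), ("B", True), ("B", False), ("B", True),
]

def train(data: list) -> dict:
    """Learn the repayment rate per postal code from the biased history."""
    stats = {}
    for code, repaid in data:
        ok, total = stats.get(code, (0, 0))
        stats[code] = (ok + int(repaid), total + 1)
    return {code: ok / total for code, (ok, total) in stats.items()}

rates = train(historical)
decisions = {code: rate > 0.5 for code, rate in rates.items()}
print(decisions)  # {'A': False, 'B': True}: formally "colorblind", yet biased
```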
Final reflection: Key insights for the digital age
When we relate the discussed philosophical positions to the current epistemological challenges, the following key insights emerge:
The Kantian insight: Knowledge is an active process of structuring, not passive representation. This warns us against a naive data positivism that believes “the facts speak for themselves.” In the era of big data and AI, we must be aware that even seemingly neutral algorithms embody certain conceptual assumptions. The idea of “presuppositionless” data-driven knowledge is a dangerous illusion.
The Wittgensteinian insight: Knowledge is embedded in linguistic practices. Concepts do not function as isolated representations of the world, but as tools in specific “language games.” This helps us understand why dialogue between different knowledge cultures — such as between science and the public — can be so difficult. It’s not just about “facts,” but about different ways of linguistically accessing the world. Digital platforms must be designed to enable productive communication between different language games, instead of separating them into isolated echo chambers.
The Brandomian insight: Knowledge is a social practice of giving and asking for reasons. This underscores the importance of epistemic communities with shared standards of reasoning. In the digital public sphere, we must create structures that support this “game of reasons,” rather than undermining it. Social media must become spaces where claims can be justified and critically examined, rather than amplifiers for unjustified opinions.
The Kuhnian insight: Knowledge is paradigmatically structured. Scientific revolutions change not only our knowledge, but our way of seeing and thinking. This sensitizes us to the profound epistemic shifts associated with new technologies. The change from a text-based to a data-based knowledge culture is not merely a methodological change, but a paradigm shift that challenges our fundamental epistemic categories.
The Rortyan insight: Knowledge is historically situated and culturally embedded. This guards us against epistemic absolutism without lapsing into relativistic arbitrariness. There is no “view from nowhere,” no neutral standpoint beyond all perspectives. But there are better and worse ways of dealing with our epistemic situatedness. In a pluralistic world, we must find ways to bring different knowledge cultures into productive dialogue, without blurring their differences.
These insights help us approach current epistemological challenges such as “fake news,” algorithmic biases, or the gap between expert knowledge and public opinion in a more differentiated way. They show that epistemology is not an abstract philosophical discipline, but an urgent practical task in an increasingly complex informational environment.
Outlook: The future of knowledge
Where is our understanding of knowledge heading in the future? Three tendencies are emerging:
First, we are experiencing an increasing hybridization of human and machine knowledge. AI systems will not only be tools, but active partners in the knowledge process. This raises new questions: How does scientific knowledge change when AI systems are involved in its production? What epistemic standards apply to human-machine hybrids? Do we need a new epistemology that goes beyond the traditional separation of human subject and known object?
Second, we see a growing importance of embodied and situated forms of knowledge. Contrary to the Cartesian separation of mind and body, newer approaches emphasize that knowledge does not only take place in the head, but encompasses the whole body, its interaction with the environment, and its social embedding. This perspective gains particular relevance in view of the virtualization of our lifeworld: If more and more experiences are digitally mediated, what does this mean for our embodied cognitive ability? How does our perception of the world change when we increasingly interact with it through screens and algorithms?
A fascinating example of this is the development in the field of virtual reality. When I put on a VR headset and move in a virtual environment, my body generates knowledge that is not purely intellectual. I “know” where I am in virtual space, not because I think it, but because my body feels it. This form of embodied knowledge challenges traditional notions of knowledge as purely mental activity.
Third, we are experiencing a renegotiation of epistemic authority. The traditional instances of knowledge production and mediation — universities, quality media, scientific institutions — find themselves challenged by decentralized forms of knowledge acquisition and dissemination. This harbors both democratic opportunities and epistemic risks. How can we promote the diversity of knowledge access without undermining the binding nature of well-founded knowledge? How can we rethink epistemic authority, beyond blind expert trust on the one hand and egalitarian “my opinion is as good as yours” on the other?
This renegotiation becomes particularly visible in science communication. While the COVID-19 pandemic underscored the importance of scientific expertise for societal decisions on the one hand, it showed on the other hand how fragile trust in this expertise can be. The question of who can claim to speak in the name of science became a central political conflict. Here we see that epistemic authority is not simply given, but socially constituted and continuously negotiated.
These developments challenge us to develop an understanding of knowledge that does justice to the complexity of our current epistemic situation. This understanding must recognize the active, constructive role of our thinking (Kant), consider the social and linguistic embedding of our knowledge (Wittgenstein, Brandom), capture the paradigmatic structure of scientific revolutions (Kuhn), and reflect the historical situatedness of all our knowledge claims (Rorty) — without falling into a paralyzing relativism.
Perhaps the key lies in a new epistemic humility. We must acknowledge that our cognitive ability is always limited and perspectival — but at the same time insist that not all perspectives are equal. Some ways of accessing the world are more productive, coherent, and responsible than others. The challenge is to articulate these qualitative differences without falling into dogmatic certainty.
Knowledge does not begin with certainty, but with attention. It requires the willingness to question oneself, to reflect on one’s own assumptions, and to discover new perspectives in dialogue with others. In a time characterized by fundamental scientific, technological, and cultural shifts, this attitude seems more important than ever to me. Perhaps the future of knowledge lies not in knowledge as fixed possession, but in knowing as an open process — a path that emerges anew with each step.
This path leads us back to the initial question: How do we attain knowledge? The answer lies not in a single method or theory, but in the permanent critical reflection of our epistemological practice. In a world where algorithms filter our information flows, where complex global challenges such as climate change or pandemics require new forms of knowledge, and where public discourse appears increasingly fragmented, this reflection becomes a societal necessity.
The philosophical tradition from Locke to Kant to Rorty offers us no ready-made answers to these challenges, but it sensitizes us to the complexity of the knowledge process and the diversity of its dimensions. It reminds us that knowledge does not simply happen, but is shaped — through individual acts of thinking, through social practices, and through institutional structures. In this shaping lies our epistemological responsibility: a responsibility that is more urgent than ever in times of algorithmic decision systems, global information networks, and complex scientific-technical challenges.
Knowledge is not a state, but a process; not a possession, but a path. And on this path, we are never alone. We walk it together — with other people, with technical systems, and with the world itself, which both resists and presents itself to our knowing. In this joint being-on-the-way, in this productive tension between knowing and not-knowing, perhaps lies the essence of knowledge itself.