Education, we continually hear, is in a crisis.  Not only is this cry overwrought; it is false: the word crisis comes from the Greek krisis, a medical term for the point in an illness at which the patient will either recover or grow irrevocably worse and inevitably die.

Education–by which I mean specifically the formally established and recognized institutions within the United States of America (and generally the Western sphere)–has long since crossed the threshold towards death.  There is simply no saving the system as it currently exists, in large part because bureaucracy has sown itself throughout every level and taken over like kudzu; anyone who has spent even a brief time in the academic trenches can attest to the suffocation experienced.  But the educational system is dying also in part, and more fundamentally, because the attitudes and beliefs about education which give structure to the system are poisonous.  A saccharine sentimentality, combined with belief in computational modelling of the mind and the pan-informational processing paradigm, both encompassed by an uncritically assumed nominalist pragmatism, has sundered the moral relevance of education–including, most especially, the conviction that there is such a thing as non-empirical, non-quantifiable truth–from the practice of educating.

If education is to be reborn a healthy being–and the fortunate truth of cultural realities is that, unlike substantial living creatures, their death does not preclude their rebirth–the rebirth must come from a deep understanding of what thought is, as well as a grasp of the influences on how we think today.  As McLuhan writes (1962: The Gutenberg Galaxy, 9): “Diagnosis and description must precede valuation and therapy.  To substitute moral valuation for diagnosis is a natural and common enough procedure, but not necessarily a fruitful one.”

Here, I want to take a look at the technological factors which have a pervasive influence in our environments.  I am, again, working off the thought of Marshall McLuhan, within the framework of a Thomistic psychology, and with a semiotic perspective (that is, such that mediation between any two or more things occurs by virtue of a sign, the being of which is a triadic relation comprising object, interpretant, and sign-vehicle).  More specifically, I am concerned with the “electric” environment–light, television, telephone, and anything which fosters instantaneous communication (including some capacities provided by the internet)–and the “digital” environment–which, we might say, comprises the coded structuring of discretely categorized knowledge made available to us through computers, smart phones, and the internet.

My generation and those subsequent to it, living today, have all been reared in the digital environment–which is not to say that the electric environment has disappeared, but rather that the two have overlapped in a multitude of often unrealized ways.

How Does Technology Affect Our Thinking?

McLuhan, having often stated that media are extensions of our natural capacities, said of electric technology that, rather than extending our discrete exterior senses (as print extends our visual capacity, at the expense of the others), electric technology extends the central nervous system: that is, rather than an avenue for thinking, electric technology extends our capacity for feeling.  Whereas the newspaper, for instance, made people think about what was happening around the world (although the nature of print media is grossly oversimplified by such a statement), the televised newscast, replete with video footage and audio recordings, makes us feel the horror.  There were no recordings of Hiroshima or Nagasaki; but Vietnam was the first war to appear on television.  And who was not overwhelmed with shock, or fear–worry, anxiety, sadness–when watching the second plane smash into the World Trade Center?  Who watches “reality” television for anything but the emotions–hope or admiration in talent shows, Schadenfreude in almost all else–that they are deliberately structured to evoke?

The extension of the central nervous system, however, has spilled over into behavior outside the scope of mere television watching; that is, we behave all-too-much in accord with the emotional fluidity of a television-environment.  Without ascertaining the precise causes, Richard Weaver noticed this in his 1948 Ideas Have Consequences, in which he levels an accusation of “mere sentimentality” (or sentimentality for sentimentality’s sake) against 20th century Western societies–an accusation which he ties, though only loosely, to the “Great Stereopticon” of our electric environment, in which we are enveloped by constant news and information through radio and television (I often wonder how Weaver would react to the internet).  It seems to me that no one with an adequate knowledge of history could claim a period in Western civilization–even the Romantic–since at least the Carolingian renaissance in which sentiment possessed a stronger hold on the most important convictions of the educated populace than it does today–at least, educated in the sense of having acquired basic literacy through widely-recognized institutional means.

Why this qualification?  Without getting lost in the details: much education in the past has conflated teaching how to think with teaching what to think; to invert the McLuhan quote above (1962: The Gutenberg Galaxy, 9), valuation and therapy often preceded (and eclipsed) diagnosis (or discernment).  Unfortunately, the reaction against this was to consign valuation to the sphere of the subjective–which makes a hollow shell of diagnosis.  In other words, “objectivity” and “value” are treated as two realities which never should, and cannot be allowed to, meet or coexist in the same realm: the latter is inescapably subjective, belonging to the idiosyncratic judgment of the private individual.  Attempting to teach students how to think while avoiding any normative claim about what one ought to think (cf. the notion of “values clarification” which grew out of the humanistic psychology of Abraham Maslow and which, even when it has not been directly employed, has nevertheless diffused itself into our educational institutions–consider the ubiquitous question asked of students, “how do you feel about that?” when the question ought to be, “what do you think of that?”) hollows education out into little more than preliminary vocational training.

That’s not to say “television is the Great Satanic Evil which has Intellectually and Morally Bankrupt’d Us All!”  It is good that we can feel (some) things more acutely at a great distance (sometimes).  It is good that there is such a rich medium for conveying rich, emotionally-compelling stories–in some cases, given that the stories themselves are truly good.  Television is not as such an evil (nor are other electric-environment-constituting technologies; electric light is wonderful).

But it is to say that if we allow ourselves full immersion in the psycho-technological environment of television, we become emotion-driven zombies, bereft of our full faculties of understanding; dissociating feeling and truth results in the enervation of both.

Digital technology, on the contrary–as is one of the theses being argued by the Center for the Study of Digital Life–reverses this trend: whereas electric technology is a constant “flow” of the ephemeral feeling of lived experience, digital technology is inherently categorical and archival.  The newscast comes and goes; the news story posted online goes into an archive–oftentimes, even, each subsequent edit going into an archive.  Though not news, consider the structure of Wikipedia: each and every change is logged, and each and every version of each and every page is logged; who said what about which and when.  But it does not tell you why–an important but inevitable omission.

At any rate, the digital psycho-technological environment is constituted in large part by the presence of codes and rules.  Whereas masters of the television environment are manipulators primarily of emotion, masters of the digital environment have the potential, at least, to be architects of thought.  The entire framework of digital computing is constituted by bits of representational information.  Much has been written on this (see, for example, the work of Douglas Rushkoff)–some of it kooky, some of it insightful.  I’m not concerned here with sorting it all out, nor even with specifying precisely what the digital technologies are which have such potential effects.  Rather, I am concerned with the general effects such technologies might have specifically on education.  Consequently, I am going to focus on the two aforementioned characteristics of digital technology: that it is archival and categorical.

That is, digital technology is archival in the sense that, in order to operate at all, it needs to preserve its previous processes in some form or another.  Each process receives its determinations in sequence from the previous processes.  Anything which proceeds sequentially is something which requires at least some preservation of the past, and which is innately open to continued and ever-expanding preservation, given the technical capacity.  Thus, digital technology is inherently archival; today, this is manifesting itself in the rich possibilities for verification which belong to blockchain (currently used primarily for managing digital cryptocurrencies, such as bitcoin, but being considered for many other applications–such as managing legal contracts, or personal identity).
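To make the archival point concrete, here is a minimal sketch of the hash-chain principle behind blockchain verification (a Python illustration with hypothetical names, not any actual blockchain implementation): each record’s identifying hash is computed from the previous record’s hash, so nothing in the history can be altered without invalidating everything that follows.

```python
import hashlib

def make_block(data, prev_hash):
    # each record's hash depends on the previous record's hash,
    # so the whole history must be preserved to verify any part of it
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def verify(chain, genesis="0" * 64):
    # re-derive every hash from the start; any tampering breaks the chain
    prev = genesis
    for block in chain:
        expected = hashlib.sha256((prev + block["data"]).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
prev = "0" * 64  # placeholder "genesis" hash
for entry in ["first edit", "second edit", "third edit"]:
    block = make_block(entry, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))          # True
chain[0]["data"] = "forged"   # alter one early record...
print(verify(chain))          # False: every later hash is now invalid
```

Because every hash depends on all prior entries, such a chain is archival by construction: verifying the present requires preserving the past.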

But it is also categorical in the sense that, in order to operate in a fashion meaningful for any interpretants, it must not only preserve the information generated by its previous processes, but categorize it as some kind by means of some code.  Anyone who has ever done any computer coding is familiar with the various steps needed to initialize different classes and routines, with conditional statements and establishment of variables, and so on.  Digital technology operates through an endless series of categorizations.  To give examples: currently, on my tablet (where I reviewed this essay this morning over my coffee), I am running a few apps: Slack, Drawboard PDF, StickyNotes, and Microsoft OneNote.  I also have Windows Explorer open to my digital library.  In Slack, I am part of only one team (the CSDL discussion group on perception), where I am currently in 15 channels of conversation; in Drawboard I have open two works of Heidegger, two works of semiotics, and McLuhan’s Gutenberg Galaxy.  On my StickyNotes, I put quick thoughts that I’m not sure yet how to categorize but which have occurred to me as important enough to preserve.  In OneNote, where I compile all of my research and notes, I have twelve “notebooks”, each of which has at least three subdivisions, which are then divided into pages (and subpages) beyond my patience to count.

In other words, just one of my devices is currently at work in the collaborative preservation and categorization of multiple on-going discussions according to different traditions on the topic of perception, the archival storage and retrieval of philosophical texts, and the extensive process of division and composition which constitutes academic research.  Merely seeing the icons on my taskbar brings to mind the general idea of each such task, and within each app, I find both the preservation of more (precision, at least) than my mind could handle on its own as well as the organization which such complexity requires–all of this through a device that weighs less than two pounds and has gone nearly everywhere I have for the last year and a half.
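The categorical character described above–the classes, conditional statements, and variables which any programmer will recognize–can be illustrated in miniature (a hypothetical Python sketch; the names are mine, not those of any real app): nothing is stored as a bare datum; everything is received as some kind, and retrieved by kind.

```python
class Note:
    # every stored item carries its category from the moment of reception
    def __init__(self, text, kind):
        self.text = text
        self.kind = kind

inbox = [
    Note("Heidegger on equipment", "research"),
    Note("buy more coffee", "errand"),
    Note("reply in the perception channel", "discussion"),
]

# retrieval, too, proceeds by category, via conditional selection
research = [n.text for n in inbox if n.kind == "research"]
print(research)  # ['Heidegger on equipment']
```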

These two broad characteristics, the archival and the categorical, of such adaptive, capacious, fluid, easily-transported technology raise important questions, not the least of which is: how should education adapt to this still-developing, still-emerging psycho-technological environment?

A False Path: the informational processing paradigm

Let me start with how it should not adapt, but how it unfortunately has so far tended to go: along what we might call the pan-informational processing paradigm.  Between the work of Claude Shannon and Norbert Wiener (the latter of whom had his ideas ultimately misappropriated), the concept of information was conceived as the negative of entropy–that is, of the state in which absolute homogeneity has rendered change impossible–and therefore to communicate information was conceived as the introduction of a difference.  On an abstract level, this framework has extremely broad applications and has dramatically accelerated the development of information technologies.  Working at that same abstract level, many believe it also to apply to human cognition: in other words, human brains are seen as highly advanced computers–processing information in evidently a different and more complex manner (i.e., multiple hierarchies of parallel processes), but nevertheless performing essentially the same kinds of operations as computers.
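Shannon’s conception can be made concrete in a few lines (an illustrative Python sketch using the standard entropy formula H = Σ p·log2(1/p)): a source that always emits the same symbol introduces no difference, and so carries zero information; a fair coin flip carries exactly one bit.

```python
import math

def shannon_entropy(probs):
    # H = sum of p * log2(1/p): the average "surprise" of a source, in bits
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# a source that always says the same thing introduces no difference:
print(shannon_entropy([1.0]))        # 0.0
# a fair coin flip carries exactly one bit:
print(shannon_entropy([0.5, 0.5]))  # 1.0
```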

The often-quoted statement of cognitive psychologist Steven Pinker (1997: How The Mind Works, 24) that, “The mind is what the brain does” exemplifies this view.  Put otherwise, an uncritical appropriation of the pan-informational processing paradigm ends up holding that the brain is hardware and the mind is its software.  The subsequent implication is that brains can be improved with the right programming–and brains which reject the right programming must have hardware deficiencies.

Some may find the reductionistic nature of this paradigm to be morally repugnant; but that alone cannot be sufficient reason to reject it.  Rather, if it is to be rejected, it must be because there is something false or inaccurate in its structure.  Fortunately for those who find it morally repugnant, there is.

Digital Archives – a row inside a server farm

What is the difference between a human being, as an intellectual being possessing a brain and a mind, and a computer, as an organized collection of hardware capable of running software?  The first of the two differences considered here–obvious, yet easily overlooked in terms of its final impact–is that the former is organic and the latter artificial.  I will return later to why this matters so much.  The second difference is that computers are specifically engineered and streamlined for processing information, whereas human beings, although we do something comparable, are far more complex.  Reducing human cognition to the paradigm of information processing strips away not only the majority of what constitutes human cognition, but within that, the most important elements: namely, the irreducibly subjective quality of lived experience and the universal truth of meaning.


While it is easy, particularly in the psycho-technological environment of television, to make too much of the irreducibly subjective quality of lived experience, this does not mean it is not important.  The subjectivity of experience becomes pernicious only when it is conceived of as raw, unadulterated feeling in opposition to the cognitive, understanding, thinking aspects of the human person.  Though emotion and reason can be and often are in opposition to one another, this is not a necessary condition, but a result of one running contrary to the good of the other.  Moreover, through lived experience we encounter the possibility of the disclosure of a mode of being–that of agency, and thus of self-possession and self-governance (cf. Wojtyla 1975: “Subjectivity and the Irreducible in the Human Being” in Person and Community, 214)–which cannot be grasped as such through conceptual objectivization.  Information, which is no more than a tool of possible meaning, cannot capture the mode of disclosure proper to lived experience.

Speaking of information as a tool of meaning: information itself neither is meaning nor does it contain or constitute meaning.  What is information?  Ask yourself: has anyone ever given you a truly satisfactory explanation of what it is?  Have you ever asked for one?  The word “information” is one of those words whose meaning we all presume ourselves to know, but which none of us can seem to explain–which, to me, indicates that we don’t really know what it means, but use it in a way which we presume is correct.  Most people, when pressed, will say that information is something which tells you about something else–what something is, when something is, etc.  An information theorist (or systems theorist) might tell you that it is the eliminative determination of possibilities by selective transmission.  Meaning has no place in such a theory.  In other words, no amalgamation of atomistic bits of information (or if one prefers, data turned into information) can provide what we understand (or experience) as “meaningful”.  Some might conjecture that “meaning” is therefore an illusion created to “make sense” of the world around us–a conjecture which, as a common contemporary nominalism, is nothing but an inverted gnosticism: “The silly uninitiated believe there is some secret ‘meaning’ to things, when the reality is that our brains simply fabricate these ‘meanings’ in an attempt to make sense of our selves and our environments.”

In other words, the information and systems theorists say that “meaning” isn’t real, because what is real can be reduced to information, and “meaning” cannot, so it must not be real.  This question-begging is nothing new; though that hardly makes it excusable.

Of course, that raises the question as to what meaning really is (which I’ve touched on a little bit here); but for the moment, let us just ask ourselves whether or not we have experienced something as meaningful, or found ourselves to struggle with the meaning of something.  Probably, the experience of meaning is ubiquitous in your life; and probably, you have found yourself wrong about the meaning of something–which suggests that it is not something purely subjective, nor is it purely illusory.

If we want an educational paradigm which reflects the reality of meaning, then, we probably do not want to treat the digital psycho-technological environment as though it were merely an extension of our brains as informational-processing equipment.  The pan-informational paradigm has an attraction given the ubiquity of computing in our lives: we have greatly advanced our technological prowess through its eliminative, reductive, abstractive approach to signalling and communication.  But to treat animal cognitive experience, and especially species-specifically human cognitive experience, within the paradigm of information not only ignores a richer semiotic reality, it enervates the possibilities of the digital environment.

The Structure of Memory

That is, to understand the possibilities of digital technology as an extension of the natural capacities of the human being requires a more careful consideration of its innate characteristics as archival and categorical–and also an understanding of how those capacities of the human which it extends fit into the overall teleological ordination of the human person.

First, the archival.  The human psyche preserves the objects of its experience in three ways: with regard to qualities of exterior sensation (such as colors, sounds, particular images considered as context-independent, etc.); with regard to experiences, such that we encounter the world (Umwelt) in accord with an already-present framework disposing us (Innenwelt) towards the objects of that world, endowing them with a meaning-for-ourselves; and with regard to the intellectually-understood qualities of our experienced objects, such that the objects are understood as having meanings-in-themselves.  In Thomistic psychology, these capacities are, respectively, the vis imaginativa, the vis memorativa, and the intellectus possibilis.

But these capacities do not merely store indiscriminately; that is, they categorize as well, not only in terms of distinct kinds of objects (as sensibles vs. experiences vs. meanings), but within each kind: we discriminate colors from sounds, and one color from another; good experiences from bad, and one good experience from another; truths from falsehoods, and one truth from another.  Every cognitive act of preservation–even when we are unconscious of the categorization–is inherently ordered towards distinguishing that which it preserves.  Yet this is not the whole story: the distinctions that we make in the objects of our understanding, the discriminations of part from whole and part from part, are for the sake of better understanding the whole.  We distinguish in order to unite–not at the whim of creative imagination, as constructivists and some nominalists would have it, but to understand and to act accordingly with that understanding.

Vatican Library

To elaborate: among the vis imaginativa, vis memorativa, and intellectus possibilis, only the first retains without some internal active operation; that is, there is no vis memorativa without the vis cogitativa, and nothing is received in the intellectus qua possibilis without the intellectus qua agens.  Moreover, the mere reception and preservation of sensibilia by the vis imaginativa of itself does nothing for the human being; without the vis cogitativa, it may receive, but left alone, that reception amounts to a meaningless heap of sensation’s results.  To become meaningful, to be part of the animal Umwelt, the object of reception must be subsumed into an interpretation.  To be understood, as an object of intellection and therefore grasped according to the meaning of the thing itself, that interpretation must be rich in experience.


Super Sententiam, lib.3, d.14, q.1, a.3, qc.3, c.
Ad tertiam quaestionem dicendum, quod ex hoc ipso quod intellectus noster accipit a phantasmatibus, sequitur in ipso quod scientiam habeat collativam, inquantum ex multis sensibus fit una memoria, et ex multis memoriis unum experimentum, et ex multis experimentis unum universale principium, ex quo alia concludit; et sic acquirit scientiam, ut dicitur in 1 Metaph., et in fine posteriorum, Lib. 2, text. 37; unde secundum quod se habet intellectus ad phantasmata, secundum hoc se habet ad collationem. Habet autem se ad phantasmata dupliciter. Uno modo sicut accipiens a phantasmatibus scientiam, quod est in illis qui nondum scientiam habent, secundum motum qui est a rebus ad animam. Alio modo secundum motum qui est ab anima ad res, inquantum phantasmatibus utitur quasi exemplis, in quibus inspicit quod considerat, cujus tamen scientiam prius habebat in habitu. Similiter etiam est duplex collatio: una qua homo procedit ex notis ad inquisitionem ignoti; et talis collatio non fuit in Christo; alia secundum quam homo ea quae habitu tenet, in actum ducens, ex principiis considerat conclusiones sicut ex causis effectus; et talis collativa scientia fuit in Christo.

[This is a rough and very hastily-done translation.  Don’t judge me.] To the third question, it must be said that from the very fact that our intellect receives from phantasms, it follows that it possesses a collative knowledge, insofar as from a multitude of sensations there is produced one memory, and from a multitude of memories, one experience, and from a multitude of experiences, one universal principle, from which it concludes others; and thus it acquires science, as is said in Metaphysics Book I, and at the end of the Posterior Analytics, Book II, text 37.  Thus, insofar as the intellect relates to phantasms, by this it relates to collation.  It is related to phantasms in two ways: in one way, as receiving knowledge from phantasms, which occurs in those who do not yet have knowledge, according to a motion which is from things to the soul.  The other way is according to a motion which is from the soul to things, insofar as it uses phantasms as examples, in which it examines what it considers, though it already had knowledge of these things in habit.  Similarly, collation is twofold: one by which man proceeds from things known to inquiry into the unknown; and such collation did not occur in Christ; the other according to which man, bringing into act those things which he holds by habit, considers conclusions from principles as effects from causes; and such collative knowledge did occur in Christ.

In other words, we do not attain understanding merely from images, but from sufficiently rich experiences which encompass a bevy of interpretative memories for those images; and we complete our understanding by returning to experiences, the fullness of our understanding equivalent to the image-and-memory richness of those experiences.  On what do we draw when we use examples to convey an idea?  You might say “imagination”–and a strong creative imagination is certainly useful; but strong creative imaginations usually go hand-in-hand(-in-hand) with strong sensory retention as well as a strong memory of experiences-as-interpreted in the Umwelt.  Someone who has never experienced strong temptations can have little understanding of what temptation is; and how we respond to such temptations has a powerful impact on our understanding as well–to give in and experience the intensity of some pleasure, or to resist and experience the virtue of self-possession.  The richly-developed phantasm, which includes not only the external sensibilia, but also the internal, is the necessary correlate of any principle which can be derived from our experiences.

In other words, the atmosphere of the lived experience as a whole plays a crucial role in the process of learning.  Human beings are not computers dealing with stripped-down, minimized streams of abstract data; we deal with fluid realities which are infinitely divisible by categorization.  The locus of this experience is the singular and irreducible subjectivity of the individual human person, which, to be dealt with fully, must be considered not simply in the categorical, conceptually-objectivized sense of the content of experience, but also in lived experience–those things that a person does, as a self-possessing, self-governing agent, and those things that happen within a person, as the recipient of happenings beyond one’s own control.  As we will see shortly, this importance of lived experience poses a challenge for the future of digital education.

The Memory of Structure

First, though, we need to account for the universal aspects in our experience–that is, in the previous section, we detailed the importance of the personal and the subjective (as in, what a person does as a responsible agent and what happens within the individual person) in memory; but this should not be taken to mean that there are no suprasubjective elements to our memorative capacities (cf. Deely 2009: Purely Objective Reality, 9-15 on why I do not use the word “objective”).  For throughout the objects constituting our lived experience, there are formal aspects which give an intelligible structure to the content, and in structuring the content, give rise to possibilities for how lived experience might unfold.

That is, the existential fact that something is remembered–and consequently, considered in some regard, as e.g., important, stupid, worthwhile, good, bad, etc.–takes root in the lived experience of the individual.  But the existential cause of that fact is something which transcends any individual: the reality through which the experience is had and thus remembered at all.

The persistent temptation of Western philosophy, in struggling to deal with this truth, is to impose on the suprasubjective reality the structure of the subjective grasp of it.  Thus, Plato, in recognizing that we understand universals, said that universal ideas themselves compose true reality, and particulars are deficient participants in the Ideas.  Many medievals, adopting the implicit framework in Aristotle, misconstrued abstraction as the liberation of an invisible, intelligible framework from a sensible, material container.  Modern philosophers, having recognized the subjective aspect to knowledge, inappropriately divided the “subjective” from the “objective”.

As is often the case, the reality is far more complex, and these “epistemological” problems–as they have been considered today, and for at least some centuries–presuppose an opposition between subjective and objective which simply does not exist, treating them as separate spheres of reality rather than correlates which are what they are precisely as defined by the other; and oftentimes, they presuppose also an opposition between formal and material, between rational and irrational, intelligible and unintelligible, which does not exist.  That is, while it is true that we relate to one and the same object with powers of intellection and powers of sensation–to the “formal” and to the “material”–that does not make the sensed thing itself, nor components of it, unintelligible or strictly material.  Rather, insofar as it is an object of sense powers, it is material or unintelligible, and as an object of intellection, formal and intelligible; but in itself, in its own ontological status, it is both without distinction.

All of which leads to this point: everything is structured.  Some things are structured better–more clearly, more cohesively with our mode of understanding–and some structures are more important than others.  Discovering, discerning, studying, and memorizing the important structures of the world around us are key actions in any meaningful education.  When I taught grammar and composition for 7th–9th grades at a homeschool co-op in Houston, I stressed the importance of memorizing certain principles of grammar–but most especially, memorizing them in context, and not merely by rote.  When they would diagram sentences (immensely helpful for some, but not all, students), I did my best to explain why the phrases and clauses were diagrammed as they were; to give reasons for each line, each connection, each relation; to explain the logic behind, for instance, elevating a prepositional phrase.

That is, while grammatical memorization is a good tool for further pedagogy, grammar itself is also a means of insight.  The way in which we structure our language is or ought to be a reflection of the structure of the world as we discover it.  The particular rules of any language are conventional, to be sure, but they fulfill a natural function and a natural desire–the desire of making sense of that which we experience.  This is why the rightful place of grammar is as the basis of the classical trivium: for logic and rhetoric are the sciences concerned with the norms of truth and the means of effectively communicating that truth to others, both of which tasks require the kind of structural insight which grammatical education provides.

The Challenge for Digital Education

The psycho-technological environment of the digital age can provide means for a better grammatical education than has existed in generations.  Emphasis today on computer programming is already, in some sense, effecting this; for computer programming consists in understanding and using languages, and each computer programming language requires knowledge of its rules of syntax.
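The point can be seen in miniature (a hypothetical illustration using Python’s built-in compiler): a programming language rejects outright any sentence which violates its grammar–exactly the kind of structural discipline which grammatical education provides for natural language.

```python
def parses(source):
    # ask Python's own compiler whether a string obeys the language's grammar
    try:
        compile(source, "<example>", "exec")
        return True
    except SyntaxError:
        return False

good = "if x > 0:\n    print(x)"
bad = "if x > 0\n    print(x)"  # the colon is missing

print(parses(good))  # True
print(parses(bad))   # False
```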

St. Thomas Aquinas (1225-1274)

At the same time, there is a distinct challenge which the digital psycho-technological environment must answer: that is, the understanding of rules alone is not sufficient for a memory which engages fully with meaning in an insightful manner; rather, such a memory requires a multi-dimensional phantasmal richness.  Currently, the digital environment has been developed primarily upon an informational-processing architecture, and therefore on the basis of a manner of thinking which is mostly abstracted from lived experience.  Ignoring the importance of lived experience will result only in a perpetuation of the electric-television environment’s dominance over sentiment and moral leanings, which are integral parts of the human psyche.  We are not angels devoid of emotion, nor are our emotions sequences to be programmed by reason; and as Thomas says (1266-68: Summa Theologiae prima pars, q.81, a.3, ad 2), following Aristotle, the appetitive powers of sensuality are ruled not with a despotic but a political rule, as free subjects are–i.e., subject to the government of the ruler, but as having something proper to themselves.


When we think about how a piece of technology functions as a sign-vehicle, we need to understand both how it presents its object and how it determines its interpretant.  In his later work, Peirce would often distinguish between immediate interpretants and final interpretants.  The immediate interpretant is the how of the determined’s orientation towards the object; the final interpretant is the why, the purpose aimed at by the process of determination.  If digital technology presents us with stripped-down, abstracted “structures” of being apart from the richness of their concrete, tactile, sensible reality, then the immediate interpretant of its determinations is reason treated apart from the fullness of lived experience, apart from the full spectrum of the human psyche; and the final interpretant is an excessively digitized human behavior–the perpetuation of the computational modelling of human existence.

And so the question with which I end this rambling post is: how are we to achieve a holistic, phantasmally-rich, memory-strengthening, structurally-insightful, healthy lived experience-fostering education in the digital age?  I don’t know.  I suspect we need to ensure we maintain rich imagery, voice, music; beauty, in short, and human contact, human context, human passion–all the tactility of the sensible, but without the ephemeral, fleeting sentimentality of the television.

