#NotAlways Shortcuts
Reflections on How Tech Shortcuts like AI Tools Could also Become Longcuts
Hello Dear,
Tell you what, I had been sitting for quite a few minutes before I began to write this line. The reason is not quite writer’s block, but perhaps its opposite - writer’s flow, if I may say so - where one is bombarded by so many different ideas that one becomes kind of paralysed, unable to decide which ones to pursue. Speaking of which, here is one article on finding flow in writing.
Coming to the simple idea for today’s article, let me start by narrating the anecdote which inspired me to write about it now. It involves a colleague who wrote an article on a current topic of interest. When I came to know about it, I messaged her, informing her that I had seen it. However, this is what she had to tell me: “Please don’t read it. Not good at all. It was just an ATR (Action Taken Report) kind of article. Could not get any inputs, and I am not happy about it.” On further prodding, she said something else along similar lines, which made me ask her why she thought so poorly of her output or contribution. She replied that it was a lazy piece of writing; that AI had worked better than her brain; and that she did not like how it came out, but had to submit it anyhow!
Now, I must confess that so far I have only done a quick read of the article, and that I am yet to analyse it critically. Even so, based on this quick read as well as on my understanding of this colleague, I am confident that the article is much better than what she says and thinks it is. In other words, I believe a fair share of the dissatisfaction my colleague feels stems from the high standards she has set for herself, as well as from her innate modesty and humility.
However, I don’t think this is the whole story. After all, there have been instances where this colleague has been happy and proud of her work and contribution (as one would expect). So what else is at play here, leading to her not finding joy and satisfaction in this particular piece of work?
I think the explanation is already revealed in my conversation with her: it is a two-fold answer, with two distinct but closely related causes.
One, “It was just an ATR (Action Taken Report) kind of article”. In other words, the primary (or only) purpose of the exercise seems to have been to tick some boxes in some report. The motivation behind the task was the quest for compliance with some executive order, instruction or direction, and not the creation of enduring meaning. It is another matter that my colleague could have invested her energies in creating meaning, even if - or especially if - the original motivation was merely compliance. But it seems that she was unable to make much of an investment in this direction, which is not surprising, given that it was not an organizational priority in any case.
Now, what does this focus on box-ticking lead to? I think it leads to the promotion of an ethic that gives primary importance to speed and efficiency: to getting the work done and being able to tell and show that it has been done, with the quality of the work becoming a secondary consideration. In other words, the box-ticking and report-producing imperative nurtures a culture where the focus tends to be on efficiency (doing things fast, doing more with less), rather than on effectiveness (doing the right things).
This brings us to what could be the second reason for the low level of satisfaction my colleague derived from her article: “It was a lazy piece of writing and AI worked better than my brain. I did not like it, but had to submit it anyhow!” What I infer from this comment, and from what I know of the situation independently, is that my colleague was under severe time pressure, and hence was in no position to find the requisite time, effort and mental space to pen the whole article herself. So she took what to many seems like - and is being heavily marketed as - a shortcut: she used some AI tool which did a good part of the work for her. But - and here is where it gets really interesting - she did not like the result, even though she says AI worked better than her brain. And being starved for time, she ended up having to submit the piece of work anyhow.
One question I would like to pose here concerns the relationship between our use of technologies like AI and our level of professional satisfaction. AI tools and other technologies promise to make our work easier. But does this necessarily mean that our work will also become faster and better? How does the use of AI tools influence the degree of satisfaction and fulfillment we are able to derive from the work we do? In which cases do we feel more satisfied and fulfilled, and in which cases do we feel less happy, when we use AI for assistance?
Well, a quick online search suggests that the use of Generative AI tools has been found to increase workplace satisfaction, by helping employees become more productive, happy and engaged at work. Either way, in this particular case, my hypothesis is that the use of AI tools led to my colleague being less personally involved and invested in the piece of work she was creating. The medium of technology acted as a barrier between her and her work. The medium became a wall, which my colleague had to cross in order to connect with her work.
Alas, she ended up in a situation where she could not, and did not even want to, call her work hers, because she feels that she has - half-knowingly or unknowingly - outsourced a part of the work to an outsider: the technology, which in this case appears to be vested not just with physical muscles (like a hammer), but with a brain as well. How real this appearance (of a brain) is, is another question, which I hope to address later.
In sum, the focus on box-ticking, and in turn on efficiency, seems to have encouraged the adoption of a shortcut which ended up being “the most unkindest cut of all” (if I may be allowed a small degree of poetic liberty). It produced a piece of output, and the output might even have been good (even very good, as I said before), but it has taken away from the creator of the work the feeling of personal satisfaction and fulfillment which creators get upon producing anything creative.
Should this be a matter of concern for us? Are we entitled to the sense of satisfaction and fulfillment which arises from the process of creating something of value? Or is this a vain concern?
Tell you what, I am not sure I am able to think clearly about this issue, and hence I doubt whether I am raising the right questions, let alone giving the right answers, even if only in the form of hypotheses. Either way, I do feel that there is a sense in which the use of what seem like shortcuts, such as AI tools, can dilute one particular source of the satisfaction we get from the work we do.
What I am referring to is the satisfaction we derive from the creative process, or the process of what could be called creative labour. Creating something - anything - requires tapping into and connecting with certain parts of our self and society, and of the world within, around and beyond us. The process involves uncertainty, and may end in failure or even unexpected success. It hence involves a significant amount of hard work as well, whether the difficulty be physical, mental or both. And when we genuinely create something, we invest our own selves into it. This is why we tend to feel deeply attached to our creations or pieces of creative output, as if they were our own children.
Here is my doubt. Could the use of tools such as ChatGPT and Grok lead to a situation where we end up using less and less of our creative labour, having outsourced it to these tools? Could it thus affect the quality of our creative work, making it less a signature of ourselves and more a reflection of influences we have passively used but not quite internalized, or perhaps not even properly understood?
Indian author Arundhati Roy raises a similar concern, though in a rather different context (that of information overload), when she asks, “are you really sure that they are your own feelings?”
Another concern is whether the use of these tools could lead to the withering away of our own innate creative and intellectual abilities. Technology is already leading to (and has always led to) the weakening of some or many of our human faculties. For instance, our use of computers and the internet has led to a deterioration in our memory and attention span. The point is persuasively made by Nicholas Carr in (at least) two of his books. One of them we have seen in the previous article too: The Shallows, which tells the mostly alarming story of what the internet is doing to our brains.
The other one is The Glass Cage: How Our Computers Are Changing Us.
I do understand that there is a huge volume of material promising that we can become more creative and productive with the use of technology, especially AI tools. Nevertheless, as you would be aware, there are also voices which regard these tools as distractions (and worse). I hope to come to this in later articles of this #NotJust newsletter.
For now, let me close by sharing my view that we need to focus on #NotJust efficiency, #NotJust speed, and #NotJust on making our work easy and fast. Given below are some related One Doubt Please articles on this theme.
#NotJust Ready To Learn The Hard Way
Reflections on Why Learning Should Be Hard!, #NotJust Allowed To Be Hard