AI, Death and Grieving – The Future
“In this world nothing can be said to be certain, except death and taxes,” wrote Benjamin Franklin in a letter to Jean-Baptiste Le Roy in 1789.
Death is a natural step in everyone’s existence on this planet, and the last one. But natural does not mean easy. The people left behind go through a difficult period of grieving in which they miss the person who died and wish they could still see them and talk to them. Getting used to someone’s absence is a slow, miserable process.
But what if it could be made easier? What if, with the help of technology, you were able to keep your loved one in your life? What if you did not have to mourn and grieve?
With advances in technology, this could all become a reality. In one hundred years’ time we will be able to create a humanoid droid and upload an AI version of the person who died. This droid will stay around to help the living accept the death and move on in a less painful way. The technology will be designed entirely around you, functioning just as your loved one once did. The product will be offered as a monthly or yearly subscription that can be cancelled at any time, so those who are grieving remain in control of their own process right up until the end.
As you mentioned becoming attached to the griefbots and having a constant reminder of the loved one who passed, do you feel this could significantly worsen mental health problems that may or may not appear with the ways we grieve today? For example, increased denial.
Is essentially living forever a good thing, or does it take away from the natural circle of life?
Based on our research, we believe so. Humans have learned to grieve over thousands of years. Right now you might look back on messages or Facebook, but you can step away from that and return to reality, whereas with the AI robot you are really bringing someone back to life to live in your reality, which could hinder your emotional state. The positive of having the griefbot is that people cannot control grief; maybe their last talk was an argument, and this gives them a chance to change that last conversation and helps them move on.
I really enjoyed watching this, well done!
I’ve never thought about a griefbot before, but now that I am thinking about it I find it a really intriguing idea. As per Siobhan’s/Chloe’s discussion above, I think it does run the risk of jeopardising the grieving process, but I think it could do a lot of good for people’s mental health too.
If griefbots become a genuine therapeutic practice in the future, do you think they will become subject to heavy regulation (or would benefit from it) so that they are not abused? (i.e. griefbots can only be activated on the recommendation of a qualified expert, griefbots can only remain active for X amount of days, etc.) Or do you think that imposing such regulations on someone’s personal grieving process would be more detrimental than beneficial?
Based on the research we did, all of us agreed that it is something that needs regulating. As we discussed in the concerns section of our report, we believe there is an opportunity for companies to exploit the vulnerability of people experiencing grief in order to make a profit, and this could easily happen through a lack of regulation. Therefore, in order to see positive results, rules must be put in place, such as the griefbot only being used when recommended, being turned off after a specific time, and so on.
Could you see a future where this program would have a domino effect, negatively affecting society?
On one hand, you would see people neglecting responsibilities and self-care to spend time with the bots. On the other hand, you would see people working themselves to death across multiple jobs and dangerous activities to be able to pay for the service. Or do you reckon some insurance policies would cover this for a certain period?
Good question. We definitely discussed this as a group, and the more we thought about it, the more negatives, especially psychological ones, began to surface. Certainly after first getting the bot, much like a new pet or baby, I imagine many would make it their number one priority and neglect everything else to spend time with it. And whilst we hinted at the company offering a free therapy service on the side, that isn’t to say they aren’t doing it for their own benefit and encouraging these people to keep renewing their subscription instead of turning the bot off for good. I suppose, depending on the cause of death of the subject, I don’t see why certain insurance companies wouldn’t jump at the chance to earn some extra money!
I really enjoyed this! It’s really interesting to question whether griefbots would actually help with your grief or if it would only ‘pause’ it until the griefbot was shut down.
Also, do you think that the fact that people often aren’t authentically themselves online could alter the digital footprints of people, leading to the ‘wrong’ personality being downloaded into these griefbots?
This is a very interesting question. We didn’t really explore this possibility when thinking of this future, but I can see how this could easily happen.
We did think about the possibility of the AI learning and growing into such a different version of the person it is meant to stand in for that it makes us question the morality of the whole thing. In general, we found ourselves in an ethical grey area, stuck between right and wrong and our moral responsibility towards the living and the dead.
This was a brilliant presentation, and indeed is a scary concept.
Rather than delaying the five stages of grief for a period of time, do you not think that a griefbot might put grief on hold indefinitely? Would someone heartbroken by the loss of a loved one ever be able to turn off a griefbot?
Also, could technological advances ever devalue life? What if griefbots essentially became replicas?
Yes, this is one of the questions we explored the most. The risk of getting people stuck in a perpetual state of grief is something we thought could happen. In our future scenario building we imagined a counselling service offered by the company to help people move on and know when it is time to deactivate the griefbot (adding some dystopic elements).
Devaluing life is another difficult concept. We did ask ourselves what would happen if a griefbot existed for longer than the human version did, and whether that would take away their humanity. It could be solved in part by keeping the griefbot as a coping tool that is deactivated as soon as it is no longer needed, but for profit it is also easy to see how companies would try to keep them around for as long as they can. In the most dystopic version of the future, this could end up being unethical and immoral towards the people who were once alive.