This time I’m going to break from my usual habits, from my profession, even from my usual genre. I’d like to speak about what is happening as if I were Nostradamus – and let’s start from the end, or from the middle of the story.
I am sure you will begin recruiting candidates shortly, and after some time, you will probably realize they are not as good as you had hoped.
Yes, even you.
Ok, you teaser, what do you mean?
It is difficult for me to estimate the order of magnitude or the time spans. Some managers will see the reason relatively quickly (I’d wager that technologically savvy managers will be far more aware of it), while some will not see it at all.
Those who do not see the reason will do what everyone who does not see the reason does: immediately guess at other solutions, fall into despair (“Oh, we just can’t find any qualified analysts these days!”), or simply blame cosmic factors for the problem.
Sure, some of those solutions will work, some better, some worse – but in the end? At the very end of everything?
The industry will benefit greatly from it.
Oh boy! I’m in – what are you trying to say?
In fact, everything I’m saying has its roots in our friend ChatGPT. Most of the tests we ask data people to complete when we evaluate them are embarrassingly easy to solve with this service – because they do not usually assess analytical abilities at all.
Now wait here, young man, are you insinuating something?
Trying not to antagonize anyone here: yes, I’m sure the test you give at your workplace truly assesses your analysts’ analytical abilities. This post isn’t about you.
“Can you tell me what chatGPT is?”
ChatGPT is a large language model developed by OpenAI. It has been trained on a huge amount of text data, allowing it to generate human-like text and responses to questions.
ChatGPT can be used in a variety of ways, including answering questions and engaging in natural language conversations. It has been used in chatbots and other applications that require the ability to generate human-like responses.
(Just for the record, if there was any doubt – I wrote this paragraph using chatGPT)
The trouble with testing technical aptitude remotely is that I, as a candidate, can take the questions, paste them into the prompt exactly as they are, and voilà – I have passed the test.
The “solution” I submitted compiles, generates answers, and, in general, passes my assessment.
Don’t believe me?
Take your test and put it to the test – does ChatGPT pass it using a “dumb copy”?
If not – Great! Now try smart copying, or fiddling with the question.
Did ChatGPT pass your assessment?
You have a problem.
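The check above can be sketched in a few lines. This is a minimal, illustrative sketch only – the helper names, the sample question, and the model name are my assumptions, not anything from the post; it assumes the official `openai` Python client and an `OPENAI_API_KEY` environment variable, and skips the network call when no key is set.

```python
# Sketch of the "dumb copy" vs. "smart copy" check described above.
# Helper names, sample question, and model choice are illustrative assumptions.
import os


def dumb_copy_prompt(question: str) -> str:
    """Dumb copy: paste the assessment question verbatim, no edits at all."""
    return question


def smart_copy_prompt(question: str, context: str) -> str:
    """Smart copy: lightly fiddle with the question -- add context,
    ask explicitly for a complete, runnable answer."""
    return f"{context}\n\n{question}\n\nAnswer with complete, runnable code."


if __name__ == "__main__" and os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI  # official OpenAI Python client

    client = OpenAI()
    # Hypothetical assessment question -- substitute your own test here.
    question = "Write a SQL query returning each customer's total 2022 revenue."
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works for this check
        messages=[{"role": "user", "content": dumb_copy_prompt(question)}],
    )
    # Now grade this output the way you would grade a candidate's submission.
    print(reply.choices[0].message.content)
```

If the verbatim paste already produces a passing answer, you are done; if not, feed `smart_copy_prompt` the same question with a sentence or two of context and try again.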
Using ChatGPT like that is cheating!
More generally: if we consider the use of ChatGPT to be cheating, then yes, the ones harmed by the process will be those who did not cheat. But I am not sure this should be our approach, at least not in the long run – just as we no longer view the use of Google as cheating (well, some of us don’t, anymore).
If our role as interviewers is to simulate the candidate’s potential workplace and day-to-day work, should we ban the use of ChatGPT at work too?
Will this simulate how things could be in the future for them? In all likelihood, no.
And because none of us will give up ChatGPT 🙂
Ok, so what is our takeaway?
As for what will happen in the immediate term, I would argue that a few things will take place:
We will have to rethink the whole concept of what defines a competent technologist – once it is no longer about writing code, or writing efficient code, what does define a skilled one?
Is it going to be a “prompt engineer”?
We, as data people, should find this easy – after all, code was only ever the chisel that freed the statue trapped inside the stone block, or something like that. And yet our tendency toward simpler measures (did you solve the technical question or not) is evident even now, and that tendency will fail us.
There will be fewer take-home tests and more tests on site – if we realize that one of our problems with technical assessment is the free use of ChatGPT, we will go back to inviting candidates to come to us to be tested.
Our assessment environments, at the very least the technical ones, will likely become more closed and less permissive of copying from the Internet (so, yeah, no problem if you use it, but you will have to write your solution as if it were 1959).
In the long term – amid all the changes we will see as the market adopts services like ChatGPT – we will simply have to produce better tests, while understanding that our entire recruitment process should look and feel different (and I say this with the highest level of certainty I can muster, which is quite small, since I am not really Nostradamus).
What do you think?