It's hard to deny the impact of newly released models such as OpenAI's ChatGPT and DALL·E 2, or Google's Bard. See that blog post on Eldorado? ChatGPT (GPT-3.5) wrote that, and DALL·E 2 generated those images. Here's my perspective on the recent hype around these models.
My emotions this entire week have been very mixed. I initially felt, as most software engineers would, somewhat intimidated. This thing could answer the most convoluted programming questions I had, even when the use case was particularly niche. If this model can produce information to such an extent, why should I even bother continuing to develop as an engineer? It feels like only a matter of time before models like this can execute most L2-L3 tasks. It was especially disheartening because I feel like I haven't yet had the chance to properly establish myself in the industry.
What made me fall in love with software engineering and programming in general is that there is always a new level to reach, a skill to learn, a chance to level up. That impression was soured by a bot that had, while still in its infancy, seemingly surpassed me in many respects.
I began by asking it random questions: "What is the best practice for storing auth tokens?"

Then I had the model complete basic tasks: "Write me a blue background in TailwindCSS that looks like moving code".

And then more complicated ones: "How can I program a real-time chat interface in Next.js?" (a rough sketch of the kind of thing I was after follows below).
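For flavour, here's roughly what that last prompt was fishing for. This is my own minimal sketch, not ChatGPT's actual output, and it's the naive version: a Next.js App Router route handler holding messages in memory, plus a client page that polls it. The /api/messages path, the Message shape and the in-memory array are all placeholder assumptions; a real chat would swap the array for a database and the polling for something push-based.

```tsx
// app/api/messages/route.ts -- hypothetical route handler.
type Message = { author: string; text: string; sentAt: number };

// In-memory store: a stand-in for a real database. It resets on
// redeploy and breaks across serverless instances.
const messages: Message[] = [];

export async function GET() {
  return Response.json(messages);
}

export async function POST(req: Request) {
  const { author, text } = (await req.json()) as Partial<Message>;
  if (!author || !text) {
    return Response.json({ error: "author and text required" }, { status: 400 });
  }
  messages.push({ author, text, sentAt: Date.now() });
  return Response.json({ ok: true }, { status: 201 });
}
```

```tsx
// app/chat/page.tsx -- client page that polls for new messages.
"use client";
import { useEffect, useState } from "react";

type Message = { author: string; text: string; sentAt: number };

export default function ChatPage() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [draft, setDraft] = useState("");

  // Poll every two seconds; "real" real-time would use WebSockets
  // or server-sent events instead.
  useEffect(() => {
    const poll = () =>
      fetch("/api/messages")
        .then((res) => res.json())
        .then(setMessages)
        .catch(() => {});
    poll();
    const id = setInterval(poll, 2000);
    return () => clearInterval(id);
  }, []);

  async function send() {
    if (!draft.trim()) return;
    await fetch("/api/messages", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ author: "me", text: draft }),
    });
    setDraft("");
  }

  return (
    <main>
      <ul>
        {messages.map((m, i) => (
          <li key={i}>
            <b>{m.author}:</b> {m.text}
          </li>
        ))}
      </ul>
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button onClick={send}>Send</button>
    </main>
  );
}
```

Polling keeps the sketch dependency-free; in practice you'd more likely reach for a WebSocket library like Socket.IO or a hosted service.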
Interestingly, its frontend code generation sucked, and that's coming from me, a backend engineer. But for general questions, no matter how complicated they were, it gave me detailed explanations with code examples on how to achieve my goal. It would even help me debug errors I ran into: instead of me reading through stack traces, Chat would do the tedious work and tell me in layman's terms what the program was angry about.
However, I also noticed that the model would often give me wrong information in a confident manner, and I often had to correct it. This got me into some annoying debugging sessions because, initially, I didn't question Chat's answers. The more I used it, the more my fears about being replaced eased. There's no way this model can be trusted with important information in a complex system, especially when there's an unknown probability it will fail. So as an engineer, my future is safe for now.
I've come to realise I had the wrong attitude towards using the model. It's not an all-knowing entity. It's more like a really advanced search engine or a smart wingman: a program you can rely on to save time on tedious tasks and give you curated information on the fly. It's fun to treat it as a personal assistant, and I even find myself adding my own personality when I interact with it, but it's not capable of true general intelligence yet.
"Hey chazza, give me a list of native hawks in Australia"
Lastly, while I am optimistic about the future prospects, new programs and innovations these models will introduce, I must acknowledge the concerns. There is the obvious issue of the privatization of AI trained on public data; while big tech companies are entitled to monetize these models, there should be an effort to make them accessible to people of all socio-economic backgrounds. There are also a lot of legal, cybersecurity and ethical concerns that come with the wide adoption of AI. Luckily, I'm just an engineer, so what would I know!
Thanks for reading!