Excuse the dated reference, but it's one I occasionally come back to. Back when jQuery was in its prime, there were two schools of thought about what front-end web developers should do:
1. Do you learn JavaScript so you can do "anything"? Or
2. Do you "just" learn jQuery?
If you take the second option and need to do something jQuery can't, you can probably find a blog post where someone else has figured out a solution. Or, if you find a bug in jQuery itself, you can wait for someone else to fix it.
However, if you take the first option, you gain deeper and broader knowledge that helps you see things in different ways, know when jQuery may not be suitable, and adapt when the next tool of choice comes along. Hey, you may even be someone who helps build the next great framework or writes the blog posts that other jQuery developers find when they get stuck.
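To make the trade-off concrete, here's a minimal sketch of the same job done both ways (the `#save-button` id and `active` class are made up for illustration):

```js
// The "just jQuery" way: concise, but the knowledge is jQuery's API.
$("#save-button").on("click", function () {
  $(this).toggleClass("active");
});

// The plain JavaScript way: a little more verbose, but querySelector,
// addEventListener, and classList are standard DOM APIs that transfer
// to any framework — or to no framework at all.
document.querySelector("#save-button").addEventListener("click", (event) => {
  event.currentTarget.classList.toggle("active");
});
```

The jQuery line is shorter, but everything in the second version still applies when jQuery isn't around.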
At a high level, it's the same as asking:
- Do you learn a language or an API?
- Do you learn all about how to do a task, or how to use a single tool that can do most of the things related to the task?
Short term, the "smart" answer is to do the simple thing.
Long term, the broader knowledge is, theoretically at least, more useful.
However, when it comes to recruitment, suitability is often measured in terms of the length of time spent using a specific tool.
"Do you have X years of experience using Y product? - If not, you're not suitable."
But I might have 5X years of experience across multiple products, including Y and its competitors/alternatives, and be considered by some to be an expert in the field.
Yet recruiters (and their automated CV-parsing tools) still don't see me as suitably experienced.
Maybe it's better to only work with quantifiable tools in measurable ways.
But what about AI, as I promised in the title?
I think there's a comparison to be made with using AI/LLMs/Copilots/Agents/etc. to help with coding.
I wanted to do something with a technology I wasn't very familiar with.
I started by working through the documentation and was learning some things, but it quickly became apparent that I'd need to spend many hours to fully understand the technology and gain the knowledge to create the best version of the solution I needed.
So, I wondered if GitHub Copilot could do it for me.
I opened the relevant file, selected some relevant lines, and asked it how to achieve the different/better/shorter/more generic approach I wanted.
It quickly produced an example, and I was able to adjust this to get exactly what I needed. At least to the point where it appears to work.
But here's the catch. I don't know enough to know if this is the "best" way to do what I want.
- Is "appears to work" good enough?
- Are there alternate solutions?
- What are the differences between the approaches?
- And the pros and cons of each?
- Are there any edge cases I need to account for that the current code doesn't cover?
- How can I create automated tests for this?
- Can Copilot answer these questions?
Is it acceptable to go with what Copilot gave me? And, if there are issues in the future, to ask Copilot (or future replacement/alternative) how to fix them?
Maybe.
Maybe I'll have to learn more to address any future issues. That's okay, as I know how to learn and have enough wider knowledge and experience to do that.
If all I could do was ask a Copilot how to solve a problem, I might get stuck when the problems get harder.
But at least I can say I have 2+ years of experience using Copilot to assist my coding...