AI is going to transform IT over the next few years. Here's what to expect:
Microsoft recently announced it would add OpenAI's GPT-4 powered features to Outlook, Word, Excel, PowerPoint and Teams. Google announced it would use its large language model, LaMDA, to power Google Workspace's Gmail, Docs, Sheets and Slides. Salesforce, HubSpot and Grammarly have announced they are adding AI too.
Later this year, you and your end-users may find that your email clients can draft detailed responses. Your word processor may be able to create decent first drafts of formulaic text documents such as corporate policies and job role specifications. Presentation software such as PowerPoint may be able to draft slides by repurposing content from other documents, emails or meeting transcripts.
Some of these tools will even summarise and revise content if asked.
In the longer term, enterprise search will use AI to move beyond text matching towards concept-based matching, where you no longer have to guess the exact words used in the email or file you're looking for.
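Under the hood, concept-based matching compares vector representations of the query and the stored documents rather than literal keywords. Here's a minimal sketch, using toy bag-of-words vectors in place of the learned embeddings real systems use:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    # Toy 'embedding': a bag-of-words count vector. Real concept
    # search uses learned embeddings, but the retrieval step
    # (cosine similarity) looks much the same.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

documents = [
    "quarterly sales figures for the finance team",
    "minutes from the security incident review",
    "draft holiday policy for all staff",
]

query = "incident review security"
ranked = sorted(documents,
                key=lambda d: cosine(vectorise(query), vectorise(d)),
                reverse=True)
print(ranked[0])  # -> minutes from the security incident review
```

With learned embeddings rather than word counts, a query like "annual leave rules" would also surface the holiday policy document despite sharing no words with it - that's the leap beyond text matching.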
For decades, a core element of cyber-defence has been watching for content signatures from prior successful attacks and then using those to block subsequent attacks. Unfortunately, generative AI will partially devalue that approach by creating malware code on the fly that hasn't been seen before.
AI's ability to vary text and code means there will be less focus on spotting suspicious content and more on spotting suspicious behaviour on endpoints, in logs and in network traffic.
Defending against such attacks will require the use of AI to parse large volumes of crowd-sourced traffic data for subtle signals. This will generate a stream of threat intelligence updates which will improve firewall, email and web filters. Large security vendors like Fortinet, Palo Alto Networks and Cisco, and large email service providers such as Microsoft and Google will have the scale required to connect the dots from attacks across thousands of clients.
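A crude illustration of the shift towards behavioural detection: rather than matching known signatures, flag activity that deviates sharply from a baseline. This toy z-score check stands in for the far richer statistical models real security tools use:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    # Flag hourly event counts more than `threshold` standard
    # deviations above the historical mean. Real tools model
    # many signals at once; this shows the basic idea only.
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts)
            if (c - mean) / stdev > threshold]

# Outbound connections per hour; hour 5 is a burst that
# signature matching would never catch.
hourly = [12, 9, 11, 10, 13, 480, 12, 11]
print(flag_anomalies(hourly))  # -> [5]
```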
Generative AI will make phishing emails harder to spot, eliminating the poor grammar and spelling that used to give such attacks away. AI will be capable of conducting spear-phishing attacks at scale, tricking end-users by name-dropping customers', partners' and colleagues' names sourced from corporate website scraping, password leaks and LinkedIn.
High-value targets have already been hit by AI-enabled deepfakes, where a CEO's voice is cloned and used to call their firm's Finance department and instruct them to urgently pay a supplier's new account. Some attacks even used AI-generated deepfake video streams.
AI will be used to review files and emails in bulk for personally identifiable information, credit card details etc. This will make it easier for endpoints, email systems and network traffic filtering tools to detect and block data exfiltration attempts that breach IT policies.
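The simplest form of this already exists as pattern matching. A minimal sketch of a PII scan (illustrative regexes only - production DLP tools add checksums, context analysis and ML classifiers):

```python
import re

# Deliberately simplistic patterns, for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "uk_phone": re.compile(r"\b(?:0|\+44)\d{9,10}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_pii(text: str) -> dict:
    # Return the PII categories found in a block of text,
    # with the matching fragments.
    return {name: pat.findall(text)
            for name, pat in PII_PATTERNS.items()
            if pat.search(text)}

sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
print(sorted(scan_for_pii(sample)))  # -> ['card_number', 'email']
```

An email gateway or endpoint agent running checks like these in bulk can hold or flag messages before the data leaves the organisation.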
Another use of AI will be detecting shadow IT so appropriate interventions can take place.
AI-powered code completion tools such as GitHub Copilot already allow coders to ship features faster, so such tools will grow in popularity. For now, the resultant code appears to be less secure. Expect more patches as a result: the code is buggier, there's more of it (because code completion speeds up programming), and the average bug will be quicker to fix using the same tools.
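As a hypothetical example of the kind of insecure suggestion researchers have flagged in AI-completed code, compare a string-interpolated SQL query with the parameterised version a careful reviewer would insist on:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The pattern completion tools often suggest: interpolating
    # user input straight into SQL - vulnerable to injection.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # -> 2 (every row leaks)
print(len(find_user_safe(conn, payload)))    # -> 0
```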
However, as AI coding tools get better at identifying and addressing security vulnerabilities during the software development process, AI may end up improving code quality.
In the long term, AI will help you prioritise your patching efforts by shifting away from generic 'severity' levels to ones that better reflect your organisation's actual risk.
Between a vulnerability being disclosed and a patch becoming available, there is often a window in which the only viable option is to apply mitigation measures. AI may allow these temporary mitigations to be implemented more easily, with the system administrator using prompts to effect the desired changes.
Looking for software support? Large software firms will increasingly try to get you to chat with an AI-powered Support chatbot first.
Sometimes, this will be an improvement on their previous tokenistic attempt at support - an inadequate FAQ and a largely deserted end-user self-help forum. Support chatbots may also be available 24/7, unlike human-staffed support lines.
If you do finally get through to a human via email or livechat, there's a good chance the response you receive will be auto-completed by generative AI offering polite, generic advice.
Longer term, AI will allow IT troubleshooting widgets to apply fixes, not just tell you what's wrong.
If you're at a large organisation, support chatbots may act as your gatekeeper, so you spend less time dealing with simpler end-user queries such as password resets and more on stuff a chatbot can't answer yet.
AI isn't just coming to end-user software like Outlook and Word. It's going to arrive in IT software too, gradually. It will act as your agent, combing through vast amounts of boring data and extracting actionable insights. It may even act on some of these by default unless you adjust your vendors' default settings.
AI will be used to parse security logs and flag potential incidents. It will parse error logs and flag if particular disks or devices appear to be approaching failure. Local networking kit may self-optimise based on analysis of past performance, adjusting traffic prioritisation for certain apps and changing routing choices.
We'll see a similar use of AI to optimise public cloud hosting, so it becomes easier to deliver performance and resilience without running up surprisingly high bills.
AI will play an important role in auto-classifying data, so Endpoint Detection and Response systems can block some attempts to leak data at scale. AI-powered data classification can help with storage tiering choices, predicting which files aren't likely to be accessed often and moving them to cheaper storage tiers automatically.
AI may help with the prioritisation of support tickets, emails and patches, so it's easier to focus on the most important ones first.
Currently, most of the web's technical content is written by humans. That may change, and not for the better. When you search for an answer to a tech question, you may land on answers that sound right but are technically inept because they've been written by a large language model that doesn't really understand the underlying technology.
Although OpenAI has investigated adding a watermark to its content and has created a tool for checking whether a given piece of text is likely to be AI-generated, that won't work on small amounts of text or on AI-generated text that's been deliberately stripped of the usual tell-tale signs of AI use.
Even on reputable sites full of user-generated technical content, there's the problem of some users abusing generative AI to try to appear more knowledgeable than they are.
So the web resources you use to help troubleshoot technical problems may get less useful as AI-generated plausible-sounding rubbish becomes more common.
You're not the average end-user. You - and other savvy users - will be able to get far more use out of AI chatbots by using advanced prompts.
For example, asking for 20 points or 40 points about a topic for use in idea generation, then prompting 'More, and don't repeat yourself' if its ideas are worthwhile. Or using AI to summarise text, so you don't have to read the whole thing. Or supplying your finished proposal/email and asking for the chatbot to point out the flaws. You can ask these chatbots to prepare objections to your proposals from a diverse range of colleagues and prepare polite but convincing rebuttals to those objections - so you're prepared to defend your idea. You can also ask it to point out additional points you've missed or to give you concrete examples of particular points you're making in the abstract.
You can use it to generate interview questions for an IT job candidate's CV and give you advice from an HR professional on how to handle a particular situation. You can use it to check your grammar and spelling. You can ask it to describe digital transformation trends in your industry and explain how broader trends in your industry are likely to impact IT.
You can ask it for Excel formulae, or to explain why buggy code is generating a particular error rather than the desired behaviour.
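As a toy example of the kind of snippet worth pasting into a chatbot, here's a classic Python bug - a mutable default argument - that produces surprising behaviour rather than an error message, along with the fix a chatbot would typically suggest:

```python
def add_ticket(ticket, queue=[]):
    # Bug: the default list is created once, when the function is
    # defined, so it is silently shared between calls.
    queue.append(ticket)
    return queue

print(add_ticket("reset password"))  # -> ['reset password']
print(add_ticket("vpn down"))        # -> ['reset password', 'vpn down']

def add_ticket_fixed(ticket, queue=None):
    # Fix: default to None and create a fresh list per call.
    if queue is None:
        queue = []
    queue.append(ticket)
    return queue

print(add_ticket_fixed("vpn down"))  # -> ['vpn down']
```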
Put simply, AI can be the personal assistant you need once you master the dark arts of prompting.
Virtual meetings are increasingly on-the-record because of automatic AI-powered transcription that's optionally available in apps such as Microsoft Teams.
In the past, you were able to discuss things privately once certain participants had left the call. Now it's common for a transcript of the entire call to be made available to all participants. This means you need to be slightly more guarded, as candid internal conversations could be read after the event by those not present. They might also be discoverable in the event of legal action by a customer, a supplier or a regulator, or read later by coworkers with access to the relevant folder.
On a personal level, this is useful, especially if you'd like to skip most of the meeting and get AI to summarise the bits you missed. However, for organisations, the liability considerations may not have sunk in yet.
If you're a Microsoft 365 admin, you can choose whether to turn on 'Allow transcription' in the Teams meeting policy.
AI will change IT recruitment. Recruiters and HR staff will use it to draft job descriptions, find potential applicants, rank applicants and even generate interview questions.
At larger firms, AI might even be used to conduct a pre-interview that can be used for screening.
In terms of your own CV, AI may help you rewrite it and you may need to think more carefully about including keywords and concepts AI systems expect to see on the CVs of ideal candidates.
Although some GPT-3-like large language models can run on a consumer laptop or even a smartphone, more advanced models will require serious computing power when generating results. AI will add to the gravitational pull of the cloud, as big SaaS providers and cloud-based multi-tenant service providers have a distinct advantage when it comes to AI. They may have access to data from thousands or millions of end-users that could help train their models, and the ability to compare model predictions against real-world data not used for training. And with thousands of customers accepting or rejecting the model's suggestions, real-world usage provides a stream of feedback that can be used to judge and improve the model.
Generative AI will enable SMEs to punch above their weight by allowing non-specialists to do things that would previously have required the help of external parties such as freelance graphic designers, web designers or marketing agencies. This will save smaller firms money and speed up execution.
COVID lockdowns forced many firms to switch to video meetings powered by Zoom, Microsoft Teams or Google Meet. However, not all employees are thrilled to appear on camera. AI is helping ease this reluctance.
AI is already used widely to replace and blur backgrounds in video meetings.
The awkwardness of looking at the screen rather than the camera will shortly be fixed by AI. Nvidia and Microsoft both plan to offer deepfake eyes on livestream feeds so participants appear to be looking directly at the camera, even when they're actually looking at their computer screen, notebook or phone.
And we're about to see a solution for those who would rather not appear on camera live. Cartoonish avatars lip-synced to participant audio will be coming to Microsoft Teams in May 2023. However, it's already possible to create natural-looking talking avatars based on a static real-world photo of the participant, with AI moving the mouth in sync with live audio. That will mean meeting participants are able to appear on camera, looking their best, without having to actually appear on the livestream. The camera may still be on, tracking head and body movements which can be mapped onto the participant's avatar.
There was an IT skills shortage before AI came along. There will be an IT skills shortage after AI is widely deployed. Greater dependence on AI will mean there's a greater dependence on IT personnel to keep everything running.
AI will reduce the massive and growing cybersecurity skills shortage, but this will not lead to job losses. Instead, it will allow organisations to level up their IT security in ways they previously couldn't afford.
SaaS providers, email service providers, managed service providers and managed security service providers are well-placed to implement AI-powered improvements. You can expect such firms to continue to grow their IT workforces faster than end-user organisations and IT consultancies once the rate-hike-driven job losses at VC-funded firms have peaked.
A lot of AI is about providing recommendations and suggestions. But there will continue to be a need for IT professionals who can evaluate that output and check its appropriateness, especially when it comes to cybersecurity and IT optimisation changes that could have major security, performance and cost implications.
If you use Microsoft Outlook, Word, Excel, PowerPoint or Teams, you're in luck. As a Microsoft partner, hSo will be able to provide you with licences for Microsoft 365, including for the use of Microsoft 365 Copilot, once the new offering hits general availability in mid to late 2023.
When it comes to cybersecurity, we're a partner of Fortinet, one of the leading network security vendors. It uses AI to improve threat detection and response, keeping endpoints safe from malware, limiting bad actors' lateral movement across the LAN, and improving traffic filtering. Our network security services help you take advantage of these developments to better protect your systems and data.
To find out more, give us a call on 020 7847 4510.