It’s one thing to talk about putting artificial intelligence to work in your agency. It’s another to actually get down to the business of doing it.
These days, everyone is talking about AI. Vendors come at you with products and services, and leaders talk about solving problems – but the practical business of identifying use cases, finding and experimenting with tools, and setting up pilots where you can develop knowledge and experience presents hurdles that can stymie even highly experienced organizations.
“AI is a totally new paradigm,” says Marc Hamilton, vice president for solutions architecture and engineering at NVIDIA. “It’s not just that it’s a new technology with new tools and challenges; the first big shift is in understanding that instead of writing code in Python or Java, you’re putting information to work to generate code for you.
“You don't actually write if-then-else code anymore,” Hamilton explains. “The data actually is the code.”
Traditionally, software engineers seeking to automate a process break the process into steps, then write a series of if-then-else statements that drive it. A standard central processing unit (CPU) then executes that code and churns out answers. With AI, on the other hand, the data, in combination with a deep-learning framework, generates the software, which runs instead on one or more graphics processing units (GPUs) – scalable processors that operate in parallel to crunch far more data, far faster, than conventional hardware.
“While all the deep learning frameworks can run on a CPU,” Hamilton explains, “practically speaking it just takes too long. CPUs are too slow.”
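The shift Hamilton describes can be sketched in a few lines of plain Python. The thermostat-style example below is purely illustrative – the threshold, samples and update rule are invented for this sketch, not drawn from NVIDIA – but it shows the contrast: in the first function, a programmer hard-codes the logic; in the second, labeled data drives the decision boundary.

```python
# Hand-written rule: the programmer encodes the logic explicitly.
def rule_based(temp):
    if temp > 100.0:          # an engineer picked this threshold by hand
        return "alert"
    return "ok"

# Data-driven: the same kind of threshold is *learned* from labeled examples.
samples = [(80.0, 0), (95.0, 0), (105.0, 1), (120.0, 1)]  # (temp, alert?)

threshold = 0.0
lr = 0.5
for _ in range(1000):                        # iterate over the data
    for temp, label in samples:
        pred = 1 if temp > threshold else 0
        threshold += lr * (pred - label)     # nudge the boundary toward the data

def learned(temp):
    return "alert" if temp > threshold else "ok"
```

Here a perceptron-style update replaces the hand-coded branch. Scale that idea up to millions of parameters, and you have the kind of training loop a framework like TensorFlow or PyTorch runs on GPUs.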
Take a basic image-recognition problem in which the aim is to train a machine to identify images that contain certain patterns. This might be to alert maintenance whenever a vehicle or building needs repairs, for example, or to flag potential tumors in medical images.
“To train a deep neural network to recognize what's in a particular image or to classify an image, you typically need about 300,000 samples,” Hamilton says. “With conventional CPUs, that might take a month.”
That might not sound so bad at first, but then reality sets in: Training a neural network is an iterative process. It can take 100 cycles to fully train an AI system using a CPU – 100 months, or just over eight years. By contrast, using a set of four GPUs, you can train that same system to the same level of accuracy in only about 100 days – less than four months.
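The arithmetic behind those estimates is worth making explicit. This snippet simply restates Hamilton's figures:

```python
# Restating the training-time arithmetic from Hamilton's figures.
cpu_months_per_run = 1             # roughly one month per training run on CPUs
training_cycles = 100              # iterations needed to fully train the network
cpu_months = cpu_months_per_run * training_cycles   # 100 months total
cpu_years = cpu_months / 12                         # just over eight years

gpu_days = 100                     # the same work on a set of four GPUs
gpu_months = gpu_days / 30         # less than four months

speedup = (cpu_months * 30) / gpu_days              # roughly 30x faster
```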
That’s transformational. And it’s exactly what propelled AI from academic labs into mainstream use across every industry.
The biggest hurdle government agencies face today in trying to operationalize AI is fear: fear of failure, fear of not understanding how to make progress, fear of not having the right expertise and talent in place, even fear of missing out – that the private sector will have AI totally figured out long before the federal government can come up to speed.
Be not afraid, says Hamilton. This is doable. Here’s how:
1. Don’t be intimidated.
“It absolutely isn't true that you need to have people with PhDs in AI working for your organization to get started,” Hamilton says. “Really, what an agency has to do is to learn how to use these new classes of tools.”
Practitioners do not have to learn a programming language or become adept, agile programmers. Rather, they need to learn to use open-source framework products such as Google’s TensorFlow, Microsoft’s Cognitive Toolkit and PyTorch.
“It's not that you're creating new fundamental PhD research,” he adds. “You're using all the research that's been done over the last several decades on AI, then you're applying your domain expertise to your problem set, by means of one of those tools.”
In other words, you can do this.
2. Go to school.
You don’t need to go to a university and start hiring data science PhDs right out of school. But you should think about bringing at least a few people up to speed on the technology and the art of the possible.
The Deep Learning Institute, founded by NVIDIA, offers a series of short workshops that give developers, data scientists and researchers the hands-on training they need to build the basic understanding required to get started.
Through self-paced, online labs and instructor-led workshops, the Deep Learning Institute teaches the latest techniques for designing, training, and deploying neural networks.
“After just a one-day class, users come out having developed a very simple deep neural network, giving them the basic skills and confidence that they can take the next step,” Hamilton says. “We don't explain all the PhD level math of what's going on behind the scenes, but you will learn enough that you can go back to your organization and start making connections about where AI could help.”
3. Think broadly.
“AI isn’t just for image recognition,” Hamilton says. Its uses are as broad as the datasets you touch.
Remember, data is just zeros and ones, and just about everything can be reduced to zeros and ones. It’s pictures, it’s sound, it’s voice, it’s Internet of Things data, it’s structured, it’s unstructured.
Datasets can also be combined. The greatest insights will come not from analyzing a single dataset in isolation, but from analyzing multiple datasets in aggregate.
Indeed, your data may hold value you don’t yet recognize. “People say, ‘Well, I have a little 50-cent sensor, and I can track the data it produces on my own – I don’t really need AI for that,’” Hamilton says. But that may miss the point. You may have a thousand of those sensors, and the sum total of the information they collect could be highly telling once deep learning looks for patterns across the combined data.
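Hamilton's sensor point can be illustrated with a stdlib-only Python sketch. The drift rate, noise level and fleet size below are invented for illustration: a slow drift that is buried in the noise of any one 50-cent sensor becomes obvious once readings from a thousand of them are pooled.

```python
import random

random.seed(42)

TRUE_DRIFT = 0.05      # slow shared drift, tiny relative to per-sensor noise
SENSORS, STEPS = 1000, 20

# Each cheap sensor sees the drift buried under heavy per-reading noise.
readings = [
    [TRUE_DRIFT * t + random.gauss(0.0, 5.0) for t in range(STEPS)]
    for _ in range(SENSORS)
]

def slope(series):
    """Least-squares slope of a series against its time index."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

single = slope(readings[0])        # one sensor: estimate swamped by noise
pooled = [sum(r[t] for r in readings) / SENSORS for t in range(STEPS)]
aggregate = slope(pooled)          # fleet average: the drift emerges clearly
```

Averaging a thousand sensors shrinks the noise by roughly a factor of the square root of 1,000, so the aggregate slope lands close to the true drift. A deep-learning model does something analogous across datasets, but can also pick out patterns that aren’t simple averages.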
4. You’re not alone.
There’s little to stop anyone from buying some cloud compute and storage capacity and starting to experiment. But there’s a big leap from little experiments to full-scale solutions.
Before committing large sums to major hardware or cloud purchases, it pays to partner with others who have deep experience with this technology and the expertise to help your agency succeed.
Dell EMC and NVIDIA, for example, have a High-Performance Computing and AI Innovation Lab with more than 1,000 servers built to demonstrate AI solutions in a realistic computing environment. “It’s an amazing proof-of-concept center, fully equipped with not only the latest Dell servers and the NVIDIA GPUs, but with high-speed storage, and staffed by Dell and NVIDIA AI experts who are there to help others develop viable and valuable proofs of concept,” Hamilton says.
Start small – you don’t have to hit a home run your first time out. Instead, invest a little to learn a lot, Hamilton says.
“You don’t need to get to 99 percent accuracy to prove a concept is worthwhile. So, you don’t have to have 300,000 data samples before you can move ahead.”
By starting with a fraction of that, you can demonstrate and prove your concept with a smaller investment in time, effort and technology. By doing so, you’ll be better prepared to make smart investments later.
For NVIDIA, that’s critical, because the company’s focus right now is on helping customers understand what’s possible with today’s technology.