Showing posts with label NetApp Exam Questions.

Thursday, February 14, 2019

Three Tips for Cloud Service Provider Success - NetApp Certification


Gaining a competitive advantage as a cloud and hosting service provider is challenging in today’s market. Developing fully connected customers is a critical piece of setting your company apart.

So, how do you start? Many cloud service providers overlook some seemingly simple but crucial steps to increase value with customers. To share these effective but underused tactics, NetApp® Global Consulting Principal Mara McMahon has crafted a free white paper, Building Fully Connected Service Provider Customers.

Here’s a quick overview.

1. Make a Good First Impression


Making a good first impression sounds simple, but it’s extremely important. Showing up on time for meetings and calls, and demonstrating that you’re engaged, shows your prospect that you respect his or her time.

And an even more important part of making a good first impression, McMahon notes, is price quoting. “Because it’s the first process-based and business-based interaction with the customer—and one that can be a challenge for many service providers—standard pricing that you can present on demand is a must,” says McMahon. By sharing price lists and by creating custom pricing sheets that are based on standard pricing deviations, you show your customers that you’re willing to take extra steps for them.

2. Make It Easy for Your Customers to Spend Their Money with You


Again, this advice might seem obvious, but many service provider pricing models don’t follow it. Help your customers understand what their costs are and how they will be billed, then be sure to follow through. Keep in mind that it should be easy for customers to upgrade their service or to buy more services from you. Build your process with the primary goal of having your customers feel a sense of ease and belonging. After you accomplish that goal, apply corporate risk mitigation practices such as credit checks and paperwork.

3. Help Your Customers Maximize Their Return on Investment in Your Services


Without maximized return on investment (ROI), your customers won’t have confidence in your ability to meet their goals. Identify opportunities that help your customers get better results than anticipated, and offer them a path for achieving those results. McMahon suggests that you:

  • Provide examples of best practices that have worked for others.
  • Schedule regular check-ins to confirm that your customers are realizing the outcomes they want.
  • Conduct business reviews on a regular schedule, such as monthly or quarterly.


Development of fully connected customers is a critical step in maximizing your own revenue. McMahon’s white paper offers more in-depth information and more tips to help you succeed. Start working toward maximizing your customers’ ROI and gaining a competitive advantage for your own business.





Thursday, January 31, 2019

How Casinos Are Gambling With Their Video Surveillance Storage - NetApp Certifications


Here’s some free advice: Don’t try to rob a casino.

It’s no secret that casinos are some of the most watched places on the planet. In a casino, hundreds of cameras are installed throughout every square inch of the building, giving security personnel high-definition, 24/7 visibility into what every patron and every employee is doing at any given time. They know which games you’re playing and for how long. They can track how much money you’re winning (or, more likely, losing), and what you do after you leave the table. They can even use advanced AI and analytics to predict your next move so that they can accurately staff bars and tables based on real-time activity.

They’re looking for card counters, whose disguises (glasses, mustaches, wigs, and hats) are no match for their facial recognition technology. (Hot tip: If you’re counting cards and the casino hasn’t kicked you out yet, it’s not because you’re really good. It’s because you’re really bad and they’ve decided to let you stay to lose more money.)

They’re looking for people with gambling addictions who have placed themselves on exclusion lists and can actually sue the casino if they’re allowed to play. And they’re looking for criminals who aren’t allowed in casinos by law.

Admittedly, much of the video captured by casinos is used after the fact for evidence. So even if you do succeed in pulling off an Ocean’s 11 heist, it probably won’t be long until the video is used to track you down.

Cameras are essential to the 24/7 operation of a casino. If even one camera goes down over a game table, the casino must shut down that table. If several cameras go down, they must close the entire floor, potentially losing thousands of dollars in revenue, to say nothing of the damage to their reputation. In the event of a shutdown, a casino can even be penalized by regulatory agencies, with fines reaching into the millions.

The Casino Storage Paradox


Yet, even with millions of dollars at stake, many casinos are still running outdated and unreliable video surveillance storage. They pump money into cameras and analytics software but prop them up with cheap, commodity storage. Traditional video deployments with low-cost, white-box digital video recorders (DVRs) are not only prone to failure but also expensive to manage and extremely difficult to scale.

If you put cheap tires on a high-performance race car, you’re going to have a bad time. 

Casinos that aren’t thinking about the storage that their video surveillance infrastructure is running on are putting their reputations and their businesses at risk. With NetApp® E-Series systems, casinos don’t have to gamble with their video surveillance infrastructure. The NetApp E-Series video surveillance storage solution is designed for the highest levels of reliability, speed, and scalability. Easy manageability and low total cost of ownership make it a perfect choice for cost-conscious casinos.




Sunday, January 20, 2019

NetApp CSO 2019 Perspectives - NetApp Certifications


As we enter 2019, what stands out is how trends in business and technology are connected by common themes. For example, AI is at the heart of trends in development, data management, and delivery of applications and services at the edge, core, and cloud. Also essential are containerization as a critical enabling technology and the increasing intelligence of IoT devices at the edge. Navigating the tempests of transformation are developers, whose requirements are driving the rapid creation of new paradigms and technologies that they must then master in pursuit of long-term competitive advantage.

1) AI projects must prove themselves first in the clouds

Still at an early stage of development, AI technologies will see action in an explosion of new projects, the majority of which will begin in public clouds.

A rapidly growing body of AI software and service tools – mostly in the cloud – will make early AI development, experimentation, and testing easier and easier. This will enable AI applications to deliver high performance and scalability, both on and off premises, and to support multiple data access protocols and varied new data formats. Accordingly, the infrastructure supporting AI workloads will also have to be fast, resilient, and automated, and it must support the movement of workloads within and among multiple clouds and on and off premises. As AI becomes the next battleground for infrastructure vendors, most new development will use the cloud as a proving ground.

2) IoT: Don’t phone home. Figure it out.

Edge devices will get smarter and more capable of making processing and application decisions in real time.

Traditional Internet of Things (IoT) devices have been built around an inherent “phone home” paradigm: collect data, send it for processing, wait for instructions. But even with the advent of 5G networks, real-time decisions can’t wait for data to make the round trip to a cloud or data center and back, and the rate of data growth keeps increasing. As a result, data processing will have to happen close to the consumer, which will intensify the demand for more processing capability at the edge. IoT devices and applications – with built-in services such as data analysis and data reduction – will get better, faster, and smarter about deciding what data requires immediate action, what data gets sent home to the core or to the cloud, and even what data can be discarded.
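A minimal sketch of this edge triage pattern is shown below; the sensor name, thresholds, and three-way routing are purely illustrative stand-ins for whatever policy a real edge application would apply.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float      # e.g., temperature in degrees Celsius (illustrative)
    timestamp: float

# Hypothetical thresholds; a real deployment would tune these per sensor.
ALERT_THRESHOLD = 90.0
INTEREST_THRESHOLD = 70.0

def triage(reading: Reading) -> str:
    """Decide at the edge what to do with a reading.

    Returns one of:
      'act'     - requires immediate local action, no round trip to the cloud
      'forward' - worth sending home to the core or cloud for deeper analysis
      'discard' - routine data that never leaves the device
    """
    if reading.value >= ALERT_THRESHOLD:
        return "act"
    if reading.value >= INTEREST_THRESHOLD:
        return "forward"
    return "discard"

if __name__ == "__main__":
    for v in (95.2, 75.0, 21.3):
        r = Reading(sensor_id="pump-7", value=v, timestamp=0.0)
        print(v, "->", triage(r))
```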

3) Automagically, please

The demand for highly simplified IT services will drive continued abstraction of IT resources and the commoditization of data services.

Remember when car ads began boasting that your first tune-up would be at 100,000 miles? (Well, it eventually became sort of true.) Point is, hardly anyone’s spending weekends changing their own oil or spark plugs or adjusting timing belts anymore. You turn on the car, it runs. You don’t have to think about it until you get a message saying something needs attention. Pretty simple. The same expectations are developing for IT infrastructure, starting with storage and data management: developers and practitioners don’t want to think about it, they just want it to work. “Automagically,” please. Especially with containerization and “server-less” technologies, the trend toward abstraction of individual systems and services will drive IT architects to design for data and data processing and to build hybrid, multi-cloud data fabrics rather than just data centers. With the application of predictive technologies and diagnostics, decision makers will rely more and more on extremely robust yet “invisible” data services that deliver data when and where it’s needed, wherever it lives. These new capabilities will also automate the brokerage of infrastructure services as dynamic commodities and the shuttling of containers and workloads to and from the most efficient service provider solutions for the job.

4) Building for multi-cloud will be a choice

Hybrid, multi-cloud will be the default IT architecture for most larger organizations while others will choose the simplicity and consistency of a single cloud provider.

Containers will make workloads extremely portable. But data itself can be far less portable than compute and application resources, and that affects the portability of runtime environments. Even if you solve for data gravity, data consistency, data protection, data security and all that, you can still face the problem of platform lock-in and cloud provider-specific services that you’re writing against, which are not portable across clouds at all. As a result, smaller organizations will either develop in-house capabilities as an alternative to cloud service providers, or they’ll choose the simplicity, optimization and hands-off management that come from buying into a single cloud provider. And you can count on service providers to develop new differentiators to reward those who choose lock-in. On the other hand, larger organizations will demand the flexibility, neutrality and cost-effectiveness of being able to move applications between clouds. They’ll leverage containers and data fabrics to break lock-in, to ensure total portability, and to control their own destiny. Whatever path they choose, organizations of all sizes will need to develop policies and practices to get the most out of their choice.

5) The container promise: really cool new stuff

Container-based cloud orchestration will enable true hybrid cloud application development.

Containers promise, among other things, freedom from vendor lock-in. While containerization technologies like Docker will continue to have relevance, the de facto standard for multi-cloud application development (at the risk of stating the obvious) will be Kubernetes. But here’s the cool stuff… New container-based cloud orchestration technologies will enable true hybrid cloud application development, which means new development will produce applications for both public cloud and on-premises use cases: no more porting applications back and forth. This will make it easier and easier to move workloads to where data is being generated, rather than moving the data to the workloads as has traditionally been the case.
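To make the portability point concrete, here is a minimal sketch using the official Kubernetes Python client (an illustrative choice, not something named in this post); it assumes a valid kubeconfig, and the same code runs unchanged whether the current context points at a public cloud or an on-premises cluster.

```python
# pip install kubernetes
from kubernetes import client, config

def list_workloads() -> None:
    # Loads credentials from the local kubeconfig; the same call works
    # regardless of which conformant cluster the current context targets.
    config.load_kube_config()
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{dep.status.ready_replicas or 0}/{dep.spec.replicas} ready")

if __name__ == "__main__":
    list_workloads()
```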




Tuesday, January 8, 2019

Exploring the Infinite Possibilities of AI with NetApp Certifications


“I think we are entering the golden decades of artificial intelligence,” Dave Hitz, NetApp’s founder and EVP, shared during an interview at NetApp Insight 2018 in Las Vegas. Futurist Gerd Leonhard agreed during his keynote: “Humanity will change more in the next 20 years than in the previous 300 years.”

How do companies make the most of their data in this brave new world of AI? The right partners hold the key. During Insight 2018, we explored the life-changing power of AI with leaders from NVIDIA and WuXiNextCODE.

The Big Data AI Train has Left the Station


NetApp’s own business model has shifted from storage to data and the cloud, and AI will be a driving force in our continued evolution. Bharat Badrinath, NetApp’s VP of Product and Solutions Marketing, shared, “AI has a profound benefit of changing how our customers operate; their entire operations have been transformed dramatically overnight.”

As Renee Yao, NVIDIA’s Senior Product Marketing Manager of Deep Learning & AI Systems, noted at Insight 2018, “We need to learn as fast and as much as we can. We can’t let the competition determine where our limit is; instead [we should only be limited by] what is possible—that is a fundamental mindset change in this AI revolution.”

Yao explained that the era of collecting and storing big data laid the foundation for this moment. Now, with the computational abilities of AI and deep learning, companies can process big data and optimize their systems in ways previously impossible.

She shared how NVIDIA, in partnership with NetApp, helped the Swiss Federal Railroad manage a system that carries more than a million passengers over 3,232 kilometers of track. The railway routes more than 10,000 trains a day, often traveling at up to 160 kilometers an hour.

Yao noted that, as a single train runs through the system, it passes 11 switches that move trains from one track to another along a route. Those 11 switches provide 30 different possible ways of routing a train. Add a second train to the system and the possible routes multiply to 900 combinations. By the 80th train, the number of possible route combinations is ten to the power of 80. “That’s more route combinations than the number of observed atoms in the universe,” Yao said, putting things into perspective.
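The growth Yao describes is easy to reproduce with a toy calculation. The sketch below simply assumes, as a simplification, that each additional train multiplies the routing options by the roughly 30 combinations quoted for a single train; real timetables add constraints, so treat it as an order-of-magnitude illustration only.

```python
# Toy model of the combinatorial growth described above: assume each
# additional train multiplies the routing options by the ~30 combinations
# quoted for a single train.
ROUTES_PER_TRAIN = 30

for trains in (1, 2, 3, 10):
    combos = ROUTES_PER_TRAIN ** trains
    print(f"{trains:>2} trains -> about {combos:.2e} possible route combinations")
```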

Now, imagine safely routing the railway’s 10,000+ daily trains. “That’s more possibilities and more data than a human can calculate,” she said. Yet the railroad’s interworking switch system must ensure that all of those trains reach their destinations without colliding.

With the NVIDIA DGX Station, a purpose-built AI workstation, the Swiss Federal Railroad is now able to simulate an entire day of train routes in just 17 seconds. Through AI, the railroad can simulate and map an unfathomable number of possible routes in less time than it takes to reheat a slice of pizza.

Must Have: Collaboration Between IT and Data Science Teams


This level of interaction between big data and AI requires an even closer, more in-step collaboration between a company’s IT and data science teams. Currently, data scientists often wait around as the IT team architects and tests new infrastructures. Likewise, when IT can’t anticipate the infrastructure needs of the data scientists, innovation can grind to a halt.

“No one can afford to be reactive,” Badrinath says. “Data scientists want to do the activation, but they can’t just go to the infrastructure team and say, ‘Hey! This is my workload—do something about it.'”

To help resolve this stalemate, NetApp and NVIDIA partnered to streamline organizational collaboration by bridging data and the cloud. Over the course of a year, NetApp integrated its systems with NVIDIA’s DGX-1 supercomputer to create a single package for NetApp customers. This makes it easier for companies to deploy AI in a pre-validated system without time-consuming handoffs across silos. By bringing IT and data science teams together, data and AI can interact seamlessly to fuel business innovation.

Genomics Data + AI = A Life-changing Solution


WuXiNextCODE showcases what this kind of collaboration could mean for our future. The company uses genomic sequence data to improve human health and knew that AI could unearth valuable—even life-saving—insights hidden in that data. They came to NetApp and NVIDIA with 20 years of historical data, totaling nearly 15 petabytes, but only two staff members to manage it all.

Using AI systems developed through the NetApp / NVIDIA partnership, WuXiNextCODE can produce huge simulations and rapid-fire queries to measure the impact of genetic mutations.

“In the past, sequencing was very slow,” Dr. Hákon Guðbjartsson, Chief Informatics Officer at WuXiNextCODE, shared during Insight 2018. He explained that a pediatrician would have to guess what gene was involved in a disease. But now, doctors can begin to narrow down possible mutations based on AI-processed sequence data.

There’s still a way to go, however: “Today, this process needs to be much more data-driven,” Dr. Guðbjartsson said. “You have millions of variants in each given individual, so you need more automation.” AI offers the way forward.

With the right partnerships and collaboration between IT and data science, innovators like WuXiNextCODE can harness the power of AI to do in seconds what previously couldn’t be done in a lifetime. As we stand at the brink of this new golden age, we’re more excited than ever about the infinite possibilities of AI.




Thursday, December 20, 2018

Understanding the Concepts of Artificial Intelligence (AI)


A quick dive into the depths of neural networks


In artificial intelligence (AI), there is sometimes a confusion of terms that can easily be avoided. AI in the narrower sense (also called “strong AI”) aims to develop machines that act as intelligently as humans – but this is an academic vision that remains out of reach. Instead, let’s focus on “weak AI” – machine learning (ML). Most AI developers would rather describe their specialty as ML. Others may have reasoning, natural language processing (NLP), or planning (automated planning) in their email signature. Like ML, these terms can also be understood as subareas of AI. How closely the technologies are interwoven is illustrated by the highly acclaimed Google Translate: since the end of 2016, the system has achieved surprisingly good results for some of the 103 supported languages because it translates entire sentences context-sensitively. This is due to the use of neural networks, which have also generally improved speech recognition.

Neural networks are in a league of their own


Artificial neural networks, as I reported in a previous post, are the heart of ML systems. They are a mathematical abstraction of information processing, similar to how it takes place in the brain. A neuron is modeled as a function (an algorithm) with inputs, parameters, and an output. Photos, text, videos, or audio files are used as data input. During learning, the parameters are changed: by modifying the weights, the model learns what is important and what is not, recognizes patterns on its own, and delivers increasingly better results. After the learning phase, the system can also evaluate inputs it has never seen before.
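To make that description concrete, here is a minimal NumPy sketch of a single artificial neuron; the toy data, sigmoid activation, and learning rate are illustrative choices and not taken from any particular ML framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: two inputs per example, binary label (a simple AND-like rule).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # the neuron's parameters (weights)
b = 0.0                  # bias
lr = 0.5                 # learning rate

# Learning phase: adjust the weights so the output moves toward the
# desired labels (gradient descent on the squared error).
for _ in range(2000):
    out = sigmoid(X @ w + b)              # input -> function -> output
    grad = (out - y) * out * (1.0 - out)  # error signal per example
    w -= lr * (X.T @ grad)
    b -= lr * grad.sum()

# After the learning phase, the neuron can score inputs; here we simply
# re-check the training examples to show the learned behavior.
print(np.round(sigmoid(X @ w + b), 2))
```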

In contrast to neural networks, expert systems – another AI subarea – do not teach themselves anything. They process large amounts of data and are connected to databases or data lakes. For data access, experts usually have to program filters. What expert systems are capable of was demonstrated, for example, in chess: in 1997, the Deep Blue system developed by IBM defeated the legendary chess world champion Garry Kasparov over six games. In the much more complex board game Go, neural nets had to be used. It was only thanks to self-learning processes that the AlphaGo system was able to beat Lee Sedol, probably the world’s best Go player, four games to one in a five-game match in March 2016.

From Machine Learning to Deep Learning


In the past, ML systems worked with an upstream feature recognition step known as feature engineering. If the task was to recognize a face, the system first searched for essential features such as eyes, a nose, or a mouth. Today’s deep learning (DL), on which the German computer scientist Jürgen Schmidhuber, among others, has worked, opens up completely new dimensions. Such a neural network consists of several layers of artificial neurons. From input to output, the data usually passes through a very simple operation in each layer, for example the application of a filter. The neurons may identify image features, as was the case with feature engineering in the past, but the model is largely left to itself: it decides which kinds of elements are best analyzed or extracted to predict the content of the image as accurately as possible. The layers are what give the neural network its depth.
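A minimal sketch of that layered structure follows, using PyTorch purely as an illustration; the layer sizes, filter counts, and ten output classes are arbitrary choices, not something specified in this post.

```python
# pip install torch
import torch
import torch.nn as nn

# A tiny deep network: each layer applies a simple operation (learnable
# filters, a non-linearity, a pooling step), and the stack of layers gives
# the model its depth. Which image features the filters end up detecting
# is left to training, not hand-engineered as in classic feature engineering.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),   # e.g., scores for 10 image classes
)

# Forward pass on a random "image" batch: 1 image, 3 channels, 64x64 pixels.
scores = model(torch.randn(1, 3, 64, 64))
print(scores.shape)      # torch.Size([1, 10])
```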

Powerful computers are available for ML today, but until recently the bottleneck was reading and writing data (I/O) from storage media. With the introduction of the NVMe mass storage interface and 100 Gigabit Ethernet in 2017/2018, this hurdle fell as well. NVIDIA and NetApp have demonstrated what is now possible by combining a DGX supercomputer with the AFF A800 all-flash storage system as a “converged infrastructure”. The figures for the ONTAP AI Proven Architecture solution are impressive: in addition to latency of less than 500 microseconds, users can achieve throughput of up to 25 GB/s. This allows a 24-node cluster to analyze more than 60,000 training images per second (ResNet-50 with Tensor Cores). Whether you see the solution as an option, an innovation, or a vision is open to interpretation – and not everyone interprets these terms the same way, which is a good thing.
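Returning to those figures for a moment, the back-of-envelope calculation below is only a rough plausibility check; the assumed average image size of about 100 KB is my own assumption, not a number from the solution brief.

```python
# Back-of-envelope check on the quoted figures. The ~100 KB average image
# size is an assumption for illustration, not a published number.
images_per_second = 60_000          # quoted ResNet-50 training rate
avg_image_bytes = 100 * 1024        # assumed average JPEG size (~100 KB)
nodes = 24

ingest_gb_per_s = images_per_second * avg_image_bytes / 1e9
print(f"Implied ingest: ~{ingest_gb_per_s:.1f} GB/s "
      f"(well within the quoted 25 GB/s ceiling)")
print(f"Per node: ~{images_per_second / nodes:.0f} images/s")
```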




Monday, December 10, 2018

NetApp Cloud Volumes for GCP Expansion Into Europe


When moving workloads to the cloud, one challenge that companies face is access to the high-performance, scalable, shared file systems (NFS/SMB) that many applications need. This week at NetApp’s European customer and partner event, Insight Barcelona, we announced new capabilities designed to help customers in Europe access the shared file storage service for Google Cloud Platform. With the expansion of NetApp® Cloud Volumes Service for GCP into Europe coming soon (early Q1 2019), even more customers will be able to deploy the jointly developed, Google-supported file storage service.

The NetApp and Google Cloud partnership combines NetApp’s world-class data services with Google Cloud Platform’s global infrastructure, which is built around regions. The service, NetApp Cloud Volumes Service for Google Cloud Platform, handles configuring and managing the storage infrastructure. Google Cloud Platform gives you complete control and ownership of the region in which your data is physically located, making it easy to meet regional compliance and data residency requirements. Cloud Volumes Service for Google Cloud Platform is a great fit for owners of UNIX, Linux, and Windows applications, line-of-business (LOB) owners, database administrators, and cloud architects who consume storage capacity but don’t want to be storage administrators.

As our global customers expand their cloud presence, we will continue to provide the services they want in the global regions they need. Today, Cloud Volumes Service for Google Cloud Platform can be found in these regions:


  • us-east4
  • us-central1 – an extremely popular region for European multinational companies



