To detect the device's CPU architecture in native code, it is recommended to use __system_property_get.
This article covers how to detect the device's CPU architecture using Java code.
Check the ro.product.cpu.abilist property and examine every entry in it.
Running “getprop ro.product.cpu.abilist” is recommended, which… Read more
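As an illustration of what checking every entry means, here is a minimal sketch that parses a hypothetical abilist value. The string below is an assumed example of what `getprop ro.product.cpu.abilist` might return on a 64-bit ARM device, not output captured from a real phone:

```python
# Hypothetical value of ro.product.cpu.abilist on a 64-bit ARM device.
abilist = "arm64-v8a,armeabi-v7a,armeabi"

abis = abilist.split(",")          # the property is comma-separated
primary_abi = abis[0]              # most-preferred ABI is listed first
is_64bit = primary_abi in ("arm64-v8a", "x86_64", "riscv64")

print(primary_abi)  # arm64-v8a
print(is_64bit)     # True
```

The same split-and-inspect logic applies whether the value comes from `__system_property_get` in native code or from running `getprop` and reading its output.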
As I mentioned in my previous post about writing vectorized reduction code with Intel vector intrinsics, that part of the code was just the finishing touch on a loop computing the squared difference of… Read more
If you were born after 1995, you may barely remember a time when televisions received only the channels your cable provider carried; before there was an Internet or YouTube or streaming video that let you watch whatever you wanted whenever … Read more >
Machine learning holds the promise not only of structuring vast amounts of data but also of creating true business intelligence
The sheer volume and unstructured nature of the data generated by billions of connected devices and systems present significant challenges for those seeking to turn this data into insight. For many, machine learning holds the promise not only of structuring this vast amount of data but also of creating true business intelligence that can be monetized and leveraged to guide decisions.
In the past, it wasn’t possible or practical to implement machine learning at such a large scale for a variety of reasons. Recently, three major advances have enabled more organizations to take advantage of machine learning to enhance business intelligence:
1) Bigger data (and more importantly, better labeled data)
2) Better hardware throughout datacenters and high performance computing clusters
3) Smarter algorithms that can take advantage of data at this scale and learn from it
Machine learning, generally speaking, refers to a class of algorithms that learn from data, uncover insights, and predict behavior without being explicitly programmed. Machine learning algorithms vary greatly depending on the goal of the enterprise and can include various algorithms targeting classification or anomaly detection, clustering of information, time-series prediction for signals such as video and speech, and even state-action learning and decision making through the use of reinforcement learning. Ensembling, or combining various types of algorithms, is also common as researchers continue to push the state of the art and attempt to solve new problems. The machine learning arena moves very fast, and algorithmic innovation is happening at a blistering pace.
With machine learning, enterprises can generate predictive models in order to accurately make predictions based on data from large, diverse, and dynamic sources such as text and metadata, speech, videos, and sensor information. Machine learning enables the scale, speed, and accuracy needed to uncover never-before-identified insights. The promise of accurate, actionable, and predictive models will drive it to play a larger and larger role in business intelligence as data continues to become more and more unmanageable by humans. This enhanced intelligence provides utility in myriad ways across many industries, including health sciences for medical imaging, financial services for fraud detection, and cloud service providers and social media platforms for services like powering automated “personal assistants,” image detection, and measuring sentiment and trends. There really is no end to the applicability of machine learning.
Banks, as an example, are applying machine learning algorithms to predict the likelihood of mortgage defaults and risk profiles. By retrospectively analyzing historical mortgages and labeling them as either acceptable or in default, a lender could leverage a trailing data set to build a more reliable analytical model that delivers direct and measurable value well into the future. By crafting models like this that learn from historical experiences, banks can more accurately represent mortgage risk, thereby reducing defaults and improving loan profitability rates.
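As a toy illustration of the idea, not a real risk model, here is a minimal logistic-regression sketch trained on hypothetical labeled mortgages. The features (loan-to-value and debt-to-income ratios) and all the numbers are invented for the example:

```python
import math

# Hypothetical historical mortgages labeled 1 (defaulted) or 0 (acceptable).
# Each feature vector: (loan-to-value ratio, debt-to-income ratio).
data = [
    ((0.60, 0.20), 0), ((0.70, 0.25), 0), ((0.65, 0.30), 0),
    ((0.95, 0.55), 1), ((0.90, 0.50), 1), ((0.98, 0.60), 1),
]

w = [0.0, 0.0]   # one weight per feature
b = 0.0          # bias term
lr = 0.5         # learning rate

def predict(x):
    # Logistic function of the weighted sum: probability of default.
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log-loss.
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# A low-ratio applicant should score well below a high-ratio one.
print(predict((0.62, 0.22)) < predict((0.96, 0.58)))  # True
```

A production model would of course use far more features, far more data, and a vetted library, but the shape of the workflow, label historical outcomes, fit a model, score new applications, is the same.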
In order to efficiently develop and deploy machine learning algorithms at scale, enterprises can leverage powerful processors like the Intel® Xeon® processor E7 family to deliver real-time analytics services, and open up new data-driven business opportunities. Organizations can also turn to highly parallel Intel® Xeon Phi™ processors to enable dramatic performance gains for highly data parallel algorithms such as the training of Deep Neural Networks. By maintaining a single source code between Intel Xeon processors and Intel Xeon Phi processors, organizations can optimize once for parallelism but maximize performance across architectures.
Of course, taking advantage of the latest hardware parallelism requires updating of the underlying software and general code modernization. This new level of parallelism enables applications to run a given data set in less time, run multiple data sets in a fixed amount of time, or run large-scale data sets that were previously prohibitive. By optimizing code running on Intel Xeon processors we can therefore deliver significantly higher performance for core machine learning functions such as clustering, collaborative filtering, logistic regression, support vector machine training, and deep learning model training and inference resulting in high levels of architectural, cost, and energy efficiencies.
Advances in high performance computing (HPC) and big data infrastructure combined with the computing capabilities of cloud infrastructure are fueling a new era of machine learning, and enabling enterprises to discover valuable insights that can improve their bottom line and customer offerings.
This blog originally appeared on InfoWorld.com.
There isn’t always an expert with all the skills you need.
For businesses, the days of the renaissance person have passed. Someone like a utility infielder who has some experience with a lot of functions used to be quite valuable because you could put them in wherever they were needed. But today’s organizations are so complex that they need people with expertise in very specific areas. These days it pays – frequently quite well – to be a specialist. For example, if you’re a data scientist, IT security expert, or computer engineer, there are people waiting to meet you in HR departments all across the globe.
But expertise can have its limits.
While it is always a good idea to have an expert doing what she or he is expert at, it is wrong to think that there is always an expert with all the skills you need. Especially when it comes to dealing with Big Data. This is one of the most complex, evolving, ambiguous, and important areas of business development today, and many companies are seeking a qualified expert to help them rein it in. But is acquiring one Big Data expert really the best path to success?
We’re at a time when the renaissance person is being replaced by the renaissance team. If you want to have success with your Big Data project, you need a group with many skill sets. So it’s important to recruit individuals with diverse capabilities which complement each other instead of spending your time searching for a mythical individual who can do it all. Of course the team has to include people who know math, statistics, and science – but those skills alone are not enough. After all, you can’t just point data scientists at your data and say “Go find stuff.”
You need to recruit people who can think about data in novel ways. So in addition to people who know about handling data, you also need people who know about your customers: their culture, their psychology, their behavior. To that end, you may want to consider onboarding a sociologist or others from the “soft” sciences.
When putting together your team, first figure out the skills you need and then find the people who have them, regardless of the field they’re in. This kind of team will be able to look at your data in a variety of ways. The people who are experts at handling data will give insights to those who understand your business needs and vice versa.
Having a variety of backgrounds and experiences is essential because there is no single way to interpret or process data. A collaborative, collective approach gives you greater insight into how your data analytics work. The sum of those unique perspectives is greater than the parts, and will continue to feed your decision-making power well into the future.
Of course there are some skills everyone on the team should have. They all need to be creative, able to handle ambiguity, and be effective at communicating. They should ideally possess some familiarity with the other teammates’ primary skill sets. It’s also good if your hard science types are a little unorthodox. A little bit of unconventional thinking can go a long way.
Data is like ore: Unless it is properly refined, shaped, and forged, it’s just a lump of rock. Doing everything needed to find the gold hidden inside is more than we can expect of any one person.
This blog originally appeared on InfoWorld.com.
I’m posing that question somewhat rhetorically. The answer happens to be the theme for Percona* Live 2016 – “Database Performance Matters!” Databases are ubiquitous, if not invisible, managing to hide in plain sight. Reading this blog? A database was involved when you signed in, and another one served up the actual contents you are reading. Buy something from Starbucks this morning and use their app to pay? I’m not an expert on their infrastructure, but I will hazard a guess that at least one database was involved.
So why the mention of Percona Live 2016? Well, recently I was offered the opportunity to speak at the conference this year. The conference takes place April 18-21. For those able to attend, the session I’m delivering is at 11:30am on April 19th. The session is titled “Performance of Percona Server for MySQL* on Intel® Server Systems using HDDs, SATA SSDs, and NVMe* SSDs as Different Storage Mediums”, creative and lengthy, I know… Without revealing the entirety of the session, I’ll go into a fair amount of it below. I had a framework in mind that involved SSD positioning within MySQL, and set out to do some additional research before putting the proverbial “pen to paper” to see if there was merit. I happened upon a talk from Percona Live 2015 by Peter Zaitsev, CEO of Percona, coincidentally titled “SSD for MySQL”. It’s a quick read, eloquent and concise, and it got me thinking: just how much does storage impact database performance? To help understand the answer, I need to offer up a quick definition of storage engines.
Database storage engines are an interesting topic (to me anyway). The basic concept behind them is to take a traditional database and make it function as much as possible like an in-memory database. The end goal is to interact with the underlying storage as little as possible, because working in memory is faster than working with storage. Generally speaking, performance is good and consistent so long as the storage engine doesn’t need more memory than it has been allocated. In situations where allocated memory is insufficient, and these situations do arise, what happens next can make or break an application’s Quality of Service (QoS).
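The in-memory-first behavior described above can be sketched as a buffer pool with LRU eviction. This is a simplified stand-in for how a storage engine keeps hot pages in memory and only touches storage on a miss, not XtraDB's actual implementation:

```python
from collections import OrderedDict

class BufferPool:
    """Toy buffer pool: serve pages from memory, evict least-recently-used
    pages when the capacity (allocated memory) is exceeded."""

    def __init__(self, capacity, read_page_from_disk):
        self.capacity = capacity
        self.read_page_from_disk = read_page_from_disk  # the slow path
        self.pages = OrderedDict()                      # page_id -> page
        self.disk_reads = 0

    def get(self, page_id):
        if page_id in self.pages:                # hit: stays in memory
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        self.disk_reads += 1                     # miss: go to storage
        page = self.read_page_from_disk(page_id)
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:      # memory pressure: evict LRU
            self.pages.popitem(last=False)
        return page

pool = BufferPool(capacity=2, read_page_from_disk=lambda pid: f"page-{pid}")
for pid in [1, 2, 1, 3, 1]:   # working set of 3 pages > capacity of 2
    pool.get(pid)
print(pool.disk_reads)  # 3: pages 1, 2, 3 each read once; page 1 stays hot
```

The point of the sketch: once the working set exceeds the pool, every extra miss becomes a storage access, which is exactly where the speed of the storage subsystem starts to dominate QoS.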
Percona Server, with its XtraDB* storage engine, is a drop-in replacement for MySQL. So, I figured it was time for a quick comparison of different storage solutions behind XtraDB. One aspect I would be looking at is how well XtraDB deals with memory pressure when a database’s working set exceeds the RAM allotted to XtraDB. This can be greatly influenced by the storage subsystem where the database is ultimately persisted.
To simulate these situations, I decided I would run a few benchmarks against Percona Server with its storage engine capped at sizes less than the raw size of the databases used in the benchmarks. This would create the necessary memory pressure to induce interaction with storage. For the storage side of the equation, I decided to compare a RAID of enterprise-class SAS HDDs against a SATA SSD and also against an NVMe SSD. My results are presented as relative to those of the HDD solution. Rather than report raw numbers, the focus here is to highlight the impact storage selection has on performance rather than promote any single configuration as a reference MySQL solution.
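The relative reporting works like this; the numbers below are made up for illustration (chosen to mirror a 53% SATA gain), not measured results:

```python
# Hypothetical raw throughput numbers (new orders per minute) for three
# storage configurations; only the ratios to the HDD baseline are reported.
results_nopm = {"HDD RAID": 100_000, "SATA SSD": 153_000, "NVMe SSD": 164_000}

baseline = results_nopm["HDD RAID"]
relative = {cfg: value / baseline for cfg, value in results_nopm.items()}

print(relative["SATA SSD"])  # 1.53, i.e. a 53% gain over the HDD baseline
```

Normalizing this way keeps the focus on the impact of the storage choice rather than on the absolute numbers of one particular server build.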
I used the following tools to perform the benchmarking:
Moving on to the base server platform:
And the underlying storage configurations tested:
Next the software stack:
The table below recaps some of the high level observations from these tests:
| HammerDB TPC-C (NOPM) | HammerDB TPC-H (Run Time) | HammerDB TPC-H (QPH) |
|---|---|---|
| Performance gains of up to 53% for SATA | Reduction in run time of up to 23% for SATA | Up to 29% more Queries per Hour for SATA |
| Performance gains of up to 64% for SATA | Reduction in run time of up to 46% for NVMe | Up to 84% more Queries per Hour for NVMe |
|Figure 1– HammerDB TPC-C Test: Relative Throughput Compared to HDD HW RAID 10|
|Figure 2– HammerDB TPC-H Test: Relative Run Time Compared to HDD HW RAID 10|
|Figure 3– HammerDB TPC-H Test: Relative Throughput Compared to HDD HW RAID 10|
All in all, this was an interesting (if not fun) exercise. Six HDDs or a single SSD? Relative performance results aside, one should also consider power consumption, reliability, and opportunity cost savings that derive from performance gains over the lifetime of a hardware platform, as often these can be more substantial than the upfront costs. Speaking of upfront costs, the Percona Live talk itself also addresses the relative upfront cost of each storage configuration, which makes for an interesting conversation when that information is juxtaposed against usable capacity and performance results.
Additional configuration details:
Additional, non-default, configuration parameters for HammerDB and Percona Server for these tests:
For HammerDB with TPC-C Option
For HammerDB with TPC-H Option
Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as HammerDB, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. Source: Internal Testing
Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.
Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit http://www.intel.com/performance.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2016 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.
Enhance Codecs with SHVC for Reliable Viewing Experiences, Use 10-bit AVC to Deliver HDR Content
Intel® Video Pro Analyzer (Intel® VPA) 2016 R4
On the cusp of the NAB Show 2016, Intel delivers an… Read more
At Intel we have been working on optimizing the Ceph storage architecture for many key IT applications, including MySQL, cloud, and big data. A number of enhancements around the use of solid-state drives (SSDs) for Ceph have caught the attention of the open source community. This work is important as CIOs see the promise of cloud and rethink how open source can assist them as a cost-effective environment for private cloud deployment.
I was excited to hear that Intel will be sharing our learnings at the Percona Live Data Performance Conference held in Santa Clara April 18-21. Come to Percona Live next week to join others in the open source community and learn from the best minds working on MySQL, NoSQL, cloud, big data, and the Internet of Things (IoT). Whether you are a DBA, developer, or architect, you can learn from your peers about the best methods for open source IT services on Ceph.
At this year’s conference, Intel is sponsoring the Data in the Cloud track. Intel solution architects will be giving three talks in the Big Data track. And Reddy Chagam, Intel’s chief Software Defined Storage Architect, will be on the keynote panel. Here is a brief look at the panel and the four Intel breakout sessions you should plan to attend.
We look forward to seeing you at Percona Live.
Keynote Panel with Reddy Chagam, Principal Engineer and Chief SDS Architect, Intel
Wednesday April 20 at 9:25am
Data in the Cloud Keynote Panel: Cloudy with a chance of running out of disk space? Or Sunny times ahead?
As larger and larger datasets move to the cloud, new challenges and opportunities emerge in handling such workloads. New technologies, revamped products, and a never-ending stream of ideas follow in the wake of this advance. These aim to improve the performance and manageability of cloud-based data, but are they enough? What issues still need to be worked out, and where are we going as an industry?
Accelerating Ceph for Database Workloads with an all PCIe SSD Cluster
Tuesday April 19, 3:50pm in Room 203
Reddy Chagam, Principal Engineer and Chief SDS Architect, Intel
Tushar Gohad, Senior Engineer, Intel
PCIe SSDs are becoming increasingly popular for deploying latency-sensitive workloads such as database and big data in enterprise and service provider environments. Customers are exploring low-latency workloads on Ceph using PCIe SSDs to meet their performance needs. In this session, we will look at a high-IOPS, low-latency workload deployment on Ceph, performance analysis of all-PCIe configurations, and best practices and recommendations.
Performance of Percona Server for MySQL on Intel Server Systems using HDD, SATA SSD, and NVMe SSD as Different Storage Mediums
Tuesday April 19, 11:30am in Room 203
Ken LeTourneau, Solutions Architect, Intel
This talk looks at the performance of Percona Server for MySQL for Linux running on the same Intel system but with three different storage configurations. We will look at and compare the performance of (1) a RAID of HDD vs. (2) a RAID of SATA SSD vs. (3) a RAID of NVMe SSD. In the talk we’ll cover the hardware and system configuration and then discuss results of TPC-C and TPC-H benchmarks. Then we’ll look at the overall system costs including hardware and software, and cost per transaction/query based on overall costs and benchmark results.
Increase MySQL Database Performance with Intel® SSD plus Cache Acceleration Software
Wednesday April 20 1pm in Room 203
David Tuhy, Senior Director of Business Development Intel, Non-Volatile Memory Solutions Group, Intel
The primary storage for many of today’s databases is usually an array of rotational media, which quickly becomes the largest bottleneck for database application performance. Replacing slower HDDs with faster solid-state drives (SSDs) and providing a method to cache frequently accessed files can greatly improve application performance and reduce support costs. This talk shows how Intel® SSD with Intel® CAS can increase database performance immediately, without a modification or change to the existing applications or storage media back-end.
New releases! Intel® Video Pro Analyzer and Intel® Stress Bitstreams & Encoder
2016 R4 – Significantly improve encoders & decoders
Things just got easier for media/video solution… Read more
What’s New in Intel® Stress Bitstreams and Encoder 2016 R4
by Jeff McAllister, technical consulting engineer for Intel® Media Server Studio product family
Video is an ever-present part of our… Read more
The new generation of Intel® Compute Stick has even more to offer with the Intel® Core™ M5 processor and an option with Intel® vPro™ technology. Plus, extra USB ports and faster wireless – the features you and your customers told us … Read more >
The post What could you do with Intel Compute Stick? Here are some ideas. appeared first on Technology Provider.
When you design for mobile analytics, the response time (sometimes also referred to as load time) will always come into question. The concept of fast loading pages is nothing new. Plenty of material exists on the web that covers everything from designing web sites to developing apps.
Every so often I am presented with this question: What is an acceptable response time for a mobile analytics asset? And my answer—“it depends”—generally causes confusion. This is often because the person asking the question is looking for a magic number of seconds that’s regularly quoted on the web by different surveys or studies.
But providing consistent results can be more challenging than it may appear. There are several factors that will ultimately dictate our success. We can manage some of them through best practices; others will be completely out of our control.
Let’s take a look at several pieces that may come into play.
Mobile analytics inherits many of the same challenges that exist with data-driven apps, which are at the mercy of the mobile networks that they run on. In a previous blog post, I referred to this network as the “digital bridge.” People frequently forget that this digital bridge can be made up of multiple networks and can lead to unreliable bandwidth.
In order to manage the challenges related to mobile networks, consider:
Typically, the response time in traditional mobile analytics implementations is made up of two main parts. First is the data load, which includes the download of results from the database query. Second is the page load, which includes the necessary elements to display on the page.
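The two parts can be measured separately, which tells you which one to optimize. The sketch below uses stand-in functions (with artificial delays) in place of a real query download and page render:

```python
import time

def download_query_results():
    time.sleep(0.05)   # stand-in for network + database query time

def render_page():
    time.sleep(0.02)   # stand-in for layout and drawing time

t0 = time.perf_counter()
download_query_results()
t1 = time.perf_counter()
render_page()
t2 = time.perf_counter()

data_load = t1 - t0    # first part: downloading the query results
page_load = t2 - t1    # second part: displaying the page elements
print(f"data load {data_load:.3f}s, page load {page_load:.3f}s, "
      f"total {t2 - t0:.3f}s")
```

In this made-up example the data load dominates, so caching or trimming the query would pay off more than tuning the rendering; real instrumentation will tell you which side of the split your users are actually waiting on.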
In order to manage the challenges related to both pieces, consider:
One of the reasons why I always say that “it depends” is because the concept of response time is dictated more by user expectations or perception and less by the actual time it takes for the page to load. The terms fast and slow are relative depending on your audience. What may be acceptable to one set of users may not be to another.
In order to manage the perception, study the target user audiences to match the solution with their expectations:
Unless you are taking advantage of new technologies, such as in-memory computing, you’ll need to successfully manage response time. Your users’ perceptions, which are largely influenced by their daily (and sometimes less complicated) experiences with their mobile devices, will have a direct impact on the success of your solution.
Stay tuned for my next blog in the Mobile Analytics Design series.
Consumer health was one of the big trends that came out of HIMSS 2016. Patients using wearable technology and smartphone apps to collect and send data to physicians is making a dramatic impact on how healthcare research is performed.
One area where this model is already moving forward is in Parkinson’s disease research. Patients battling this disease usually see their physicians every six to 12 months. By utilizing technology, patients can regularly collect data on their movements, send the information to the cloud for analysis, and be better prepared for their next appointment. This process provides more value for each interaction with the doctor and from what we see, the patients are excited to be able to contribute data and help researchers combat this disease.
In the above video clip, Chen Admati, advanced analytics manager at Intel, explains how consumer health platforms such as wearable technology are helping in Parkinson’s disease research and shows how Intel is working to develop new algorithms to analyze important information. The hope is to take the value from this research model and translate it to other disease platforms to combat some of the most prevalent health challenges facing us today.
Watch the video and let us know what questions you have about wearables and consumer health.
In one of my posts, “Developers Need to Consider Relative Input For Intel® RealSense™ Technology,” I mentioned how there is a shift happening in computing. Computer control using Intel RealSense technology is best when it’s relative control versus absolute control. Another … Read more >
The security industry is changing. Technology innovation is eroding the distance between the roles and responsibilities of traditionally independent physical and cyber security teams. Modern physical security tools now rely heavily on networks, clouds, firmware, and software which puts them at risk of cyber exploitation. Computing devices, no matter how well managed, are largely vulnerable to physical attacks. The biggest convergence between these two worlds is coming from the rapid growth and adoption of Internet of Things (IoT), which extends access, control, and people-safety issues to users and businesses. Transportation, critical infrastructure, healthcare, and other industries currently rely on strong physical controls. More and more, they will also require the same benefits from cybersecurity to achieve the common goals of protecting people’s safety, property, and business assets. In the highly connected world of the near future, attacks against both physical and cyber targets will originate from far across the digital domain. Convergence is coming.
At this year’s 2016 ISC West conference, one of the largest security conferences with over 1,000 exhibitors and brands, the organizers took an aggressive step that showed their insight into the future. The Connected Security Expo, a sub-conference with separate tracks, was established and began its inaugural year, bringing together for the first time both physical and cyber security professionals at ISC West. I was honored to deliver one of the two keynotes to a combined audience of security leaders who recognize the inevitable intersection of security.
Organizations must address the combined physical and cyber threats they will face. Leaders require insight into the complex future of cybersecurity, both the challenging risks and the equally lucrative opportunities that will emerge as cyber-threats maneuver over time. In my presentation I discussed how cybersecurity, like its physical counterpart, is a difficult and serious endeavor, one that strives to find a balance in managing the security of computing capabilities to protect the technology which connects and enriches the lives of everyone.
The 2016 Future of Cyber Security presentation showcased the cause-and-effect relationships, provided perspectives on the forthcoming challenges the industry is likely to face, and showed how aligned security can be better prepared to manage them. A number of other notable speakers, including Mike Howard, CSO for Microsoft, shared insights with the audience. Herb Kelsey, Chief Architect at Guardtime, and Nate Kube, Founder and CTO of Wurldtech a GE Company, also presented a keynote: “Reducing the Time to Detect Tamper – Physical Security’s Mission Against Cyber Threats”. They discussed the benefits and risks of the connected world, from power stations to light bulbs and everything in-between. The unintended consequences will include bad actors using technology against us instead of for us. The speakers partnered to showcase the future trends and technologies in securing the promise of the Internet of Things.
I look forward to next year’s Connected Security Expo, as its audience will continue to grow. Speaker topics, threats, and the synthesis of technology will be even stronger. I expect other conferences to start down the same path in an attempt to catch up with ISC West. It makes sense, as the convergence between physical and cyber security will continue to gain momentum.
By: Chad Arimura, Iron.io CEO and Co-founder
You may have heard the term serverless computing being tossed around recently. This doesn’t mean we’re getting rid of the data center in any form or fashion; it simply means that we’re entering a world where developers never have to think about provisioning or managing infrastructure resources to run distributed applications at scale. This is done by decoupling backend jobs as independent microservices that run through an automated workflow when a predetermined event occurs. For the developer, it’s a serverless experience.
There have been two separate but related innovations that enable this software-defined future. One is at the application layer, where APIs and developer tools abstract away the complex configuration and operations required to distribute and execute workloads at massive scale. The other is at the infrastructure layer, where workloads are profiled for their most optimal hardware conditions. These innovations narrow the gap between developer and chip, leading to more intelligent, workload-aware systems.
At Iron.io, we have been leading this serverless computing movement through our container-based job processing platform. Through an event-driven model, developers simply package their job code as a lightweight Docker image and set when they want it to run: on a regular schedule, from a web hook, when a sensor goes off, or from a direct API call. When the event triggers, an automated workflow kicks off behind the scenes to spin up a container and execute the job. All that’s needed is available infrastructure resources to distribute the workloads.
This convergence between the application layer and the infrastructure layer is where Intel and Iron.io intersect. For example, an encryption workload is best run on an Intel platform that uses CPUs that have Intel® AES-NI instructions, but it shouldn’t have to be up to the developer to make that call. The Snap Framework collects the telemetry data that tells Iron.io where best to deliver the job. This is done by including a Snap plugin within the Iron.io runtime that captures the memory, CPU, and block I/O environment for each container process. In addition, Snap can capture advanced characteristics using Intel® RDT (Cache Monitoring Technology and Cache Allocation Technology). The data is then analyzed and published back to Iron.io so the next time the job is triggered to run, it can be routed to the right Intel processor with the right resource allocation.
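A hypothetical sketch of this capability-aware routing: pick a host whose telemetry shows the feature a job needs. The host records, feature flags, and selection policy below are all invented for illustration and do not reflect Snap's or Iron.io's actual data models:

```python
# Invented telemetry records, of the kind a collector might publish:
# which CPU features each host offers and how much CPU headroom it has.
hosts = [
    {"name": "host-a", "features": {"aes-ni"}, "cpu_free": 0.30},
    {"name": "host-b", "features": set(),      "cpu_free": 0.90},
    {"name": "host-c", "features": {"aes-ni"}, "cpu_free": 0.75},
]

def route(required_feature):
    # Keep only hosts that can accelerate this workload...
    candidates = [h for h in hosts if required_feature in h["features"]]
    # ...then prefer the capable host with the most free CPU.
    return max(candidates, key=lambda h: h["cpu_free"])["name"]

# An encryption job wants AES-NI: host-b is busier-proof but incapable,
# so the router picks the freer of the two capable hosts.
print(route("aes-ni"))  # host-c
```

The real system layers much more on top (block I/O and memory telemetry, cache allocation via Intel RDT), but the core decision, match workload requirements against measured host capabilities, has this shape.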
This collaboration between Intel and Iron.io represents the future of workload-aware computing environments. When dealing with web scale and beyond, incremental optimizations led by software defined infrastructure can make the difference between success and failure. We’re excited to collaborate with Intel to empower the modern Enterprise with a serverless future.
In order to prove to a client that changing their indexed collection of complex numbers from an array of structures (with a real part and an imaginary part) to a structure of arrays would improve performance, I created a… Read more
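For readers unfamiliar with the layout change the teaser above refers to, here is a minimal sketch of the array-of-structures versus structure-of-arrays difference for complex numbers, computing squared magnitudes both ways; the data is invented for the example:

```python
# AoS: one list of (real, imag) structures.
aos = [(1.0, 2.0), (3.0, 4.0)]

# SoA: one contiguous array per component. This layout is friendlier to
# vectorization because each loop streams over homogeneous data.
real = [1.0, 3.0]
imag = [2.0, 4.0]

sq_aos = [re * re + im * im for re, im in aos]
sq_soa = [re * re + im * im for re, im in zip(real, imag)]

print(sq_aos == sq_soa)  # True: same result, different memory layout
```

In Python the two forms compute identically; the payoff of SoA shows up in compiled code, where the unit-stride component arrays let the compiler issue vector loads instead of gathering interleaved fields.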
On October 5, 2015, Intel Education Service Corps (IESC) sent a group of Intel employee volunteers to a school in Kigali, the capital of Rwanda, on a mission to change the lives of students and teachers using education and Intel … Read more >
The post Intel Employees Enable Digital Classrooms in Rwanda appeared first on Jobs@Intel Blog.