DRIVER VERSION: 22.214.171.124.4380 & 126.96.36.199.4380
DATE: February 5, 2016
This driver is in zip format intended for developers and IT professionals.
32bit – win32_154018.4380.zip
64bit -… Read more
DistribuTECH 2016 is off to a strong start with lots of activity on the show floor, particularly around mobility and collaboration tools. Workforce transformation is a key business driver in the energy industry, given that 50 percent of current field … Read more >
The post Thoughts from DistribuTECH Day 1: Mobility and Collaboration appeared first on Grid Insights by Intel.
Please join IXPUG (Intel® Xeon Phi™ Users Group) for a meeting at the Clarion Congress Hotel, hosted by IT4Innovations, VŠB – Technical University of Ostrava in Ostrava, Czech Republic. The meeting… Read more
Unlike software components operating within an enterprise, the Web services model establishes a loosely coupled relationship between a service producer and a service consumer. Service consumers have little control over services that they employ within their applications. A service is … Read more >
The post Web Services-based Development: Challenges and Opportunities appeared first on Intel Software and Services.
It seems like just yesterday that we were leaving Chicago and basking in the innovation on display at HIMSS 15. Actually, since the last show was in April and the biggest event in healthcare technology is now back in its usual calendar slot, we’re ready for the second HIMSS in less than 12 months.
This year, the healthcare technology community is headed to Las Vegas February 29 to March 3, 2016, to see what innovation will be on the healthcare horizon in 2016 and beyond. You should expect to see more conversations around how patients, and their user-generated data, play into healthcare going forward.
At Intel, we’re approaching HIMSS 16 with a critical eye on three areas that we feel are focal points for CMIOs: precision medicine, health IT and medical devices, and consumer health. All are patient-focused.
To learn more about these pillars, you’re invited to the Intel booth (#3032) to view the latest technology platforms that focus on the rise of patient engagement and consumer generated health data. We encourage you to stop by and take a guided tour, where you’ll see these demonstrations:
Outside of the Intel booth, you will find our technology in a number of HIMSS Kiosks that showcase real solutions available today:
Finally, be sure to follow @IntelHealth on Twitter to keep up-to-date on all the happenings going on at the event. We’ll be live tweeting from the show floor and sharing pictures of new health IT products/services that we discover. We’ll also be giving away a Basis Peak watch every day during HIMSS through a Twitter contest so be on the lookout for how you can win.
HIMSS is always a great event and we are looking forward to seeing you in Las Vegas.
What are you most looking forward to seeing at HIMSS16? Tweet us @IntelHealth.
Based on reports in recent news, some forms of insider threat get a lot of attention. Just about everyone has heard of examples of damage caused by a disgruntled employee, workplace violence, or theft of intellectual property. But insider threat is actually much larger than those common examples. At Intel, we’ve been studying this situation and have documented our findings in a white paper we call the Insider Threat Field Guide. In this field guide, we discuss 13 distinct insider threat agent types and the insider events they are most likely to cause, providing a comprehensive approach to identifying the most likely insider threat vectors. We are sharing this guide so other companies can improve their security stance too.
For example, one threat agent type we identified is the “outward sympathizer.” Our identification of this character is unique in the industry—we were unable to find any published analysis of this type of insider threat. We define an outward sympathizer as a person who knowingly misuses the enterprise’s systems to attack others in support of a cause external to the enterprise.
As we developed the field guide, we characterized the outward sympathizer threat as follows:
The outward sympathizer is a complex threat agent and triggering events can vary widely. Perhaps there is conflict in a country in which family resides, or an environmental issue that the insider feels strongly about. It can be difficult to predict what will trigger an outward sympathizer attack because the reason for the attack may be entirely unique to the sympathizer and not obvious to others.
Outward sympathizer activity can occur at three escalating levels. Even the most benign level could potentially have devastating consequences for the enterprise.
Enterprises should include outward sympathizers in their own insider threat models and plan for mitigation. Because this type of threat agent presents differently than most other characters, particularly at the benign level, it can be hard to detect—in fact, some of their methods may not be traceable back to the individual. The unique aspects of the outward sympathizer are motivation and timing, so the most effective mitigations will target those.
Research by CERT and others suggests that strong tone-from-the-top security messaging is an effective behavioral deterrent, especially for non-professional threat actors. In addition, we use the following techniques to help minimize the likelihood of outward sympathizer events:
The technical methods used by outward sympathizers are not unique (as a class) and follow classic attack patterns. Technical controls are environmental, not specific. In particular, although it is common to monitor networks for incoming attacks, it is less common to monitor for outgoing attacks. Other effective technical controls include the following:
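One of those controls, monitoring for outgoing attacks, can be illustrated with a simple baseline check over outbound flow records. This is only a sketch of the idea, not an Intel tool; the host names, thresholds, and flow format below are invented for illustration:

```python
from collections import Counter

def flag_outbound_spikes(flows, baseline, factor=3.0):
    """Flag internal hosts whose outbound connection count exceeds
    `factor` times their historical per-window baseline.

    flows: iterable of (src_host, dst_ip) tuples for the current window
    baseline: dict mapping src_host -> typical connections per window
    """
    counts = Counter(src for src, _ in flows)
    flagged = []
    for host, n in counts.items():
        typical = baseline.get(host, 1)
        if n > factor * typical:
            flagged.append((host, n, typical))
    return flagged

# Hypothetical window: one workstation suddenly opens many outbound connections
flows = [("wks-17", "203.0.113.9")] * 40 + [("wks-02", "198.51.100.4")] * 3
baseline = {"wks-17": 5, "wks-02": 4}
print(flag_outbound_spikes(flows, baseline))  # → [('wks-17', 40, 5)]
```

A real deployment would feed this from NetFlow or proxy logs and tune the baseline per host, but the core observation stands: outbound anomalies are as detectable as inbound ones, if anyone is looking.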
Intel IT’s Insider Threat Field Guide—including our understanding of the outward sympathizer threat agent—is an innovative way of looking at the full scope of insider threats. I believe other security professionals can use the field guide to identify and prioritize insider threats, communicate the risk of these threats, and optimize the use of information security resources to develop an effective defense strategy. I encourage you to share your feedback on the field guide by leaving a comment below. In addition, if you are looking for more information about our other security solutions, check out the 2015-2016 Intel IT Annual Performance Report. We hope you will join the conversation!
As the first release of the Intel® RealSense™ SDK (Windows) in 2016, R1 (aka version 220.127.116.1128) focuses on improvements for the Intel® RealSense™ SR300 camera, introduction of Platform Camera… Read more
(Cross-posted from my blog on http://evangelists.intel.com)
If you are a fan of PHP for developing web applications (as I am), you could feel the world shift a bit as it was announced that WordPress was switching from PHP to Node.js. Don’t think of WordPress as a blog … Read more >
1. An eCryptfs*-Based Solution for Securing Your Data on Android*
The threat to data on mobile devices is a serious issue. Not only have Android* developers worked on security, but many application… Read more
Here is another success story of how the Multi-OS Engine Technology Preview helped one of our customer firms, Auriga, which has been developing innovative solutions for its clients for 25 years in the… Read more
Check out this joint Intel & Cloudera blog to get an update on the progress of the effort to bring erasure coding to HDFS, including a report about fresh performance benchmark testing results. Read more
A rise in the use of mobile devices and applications has heightened the demand for organizations to elevate their plans to deliver mobile analytics solutions. However, designing mobile analytics solutions without understanding your audience and purpose can sometimes backfire.
I frequently discover that in mobile analytics projects, understanding the purpose is where we take things for granted and fall short—not because we don’t have the right resources to understand it better, but because we tend to form the wrong assumptions. Better understanding of the “mobile purpose” is critical for success and we need to go beyond just accepting the initial request at the onset of our engagements.
The Merriam-Webster dictionary defines purpose as “the reason why something is done or used: the aim or intention of something.” Although the reasons for a mobile analytics project may appear obvious on the surface, a re-evaluation of the initial assumptions can often prove invaluable both for the design and the longevity of mobile projects.
Here are a few points to keep in mind before you schedule your first meeting or lay down a single line of code.
I often talk about the importance of executive sponsorship. There’s no better person than the executive sponsor to provide guidance and validation. When it comes to technology projects (and mobile analytics is no different), our engagements need to be linked directly to our strategy. We must make sure that everything we do contributes to our overall business goal.
Is it relevant? It’s a simple question, yet we have a tendency to take it for granted and overlook its significance. It doesn’t matter whether we’re designing a strategy for mobile analytics or a simple mobile report—relevance matters.
Moreover, it isn’t enough just to study its current application. We need to ask: Will it be relevant by the time we deliver? Even with rapid-deployment solutions and agile project methodologies, there’s a risk that certain requirements become irrelevant if the business processes that mobile analytics depends on change, or if the mobile analytics solution highlights gaps that require a redesign of those processes. In the end, what we do must be relevant both now and when we go live.
Understanding the context is crucial, because everything we do and design will be interpreted according to the context in which the mobile analytics project is managed or the mobile solutions are delivered. When we talk about context in mobile analytics, we mustn’t think only about the data consumed on the mobile device, but also how that data is consumed and why it was required in the first place.
We’re also interested in going beyond the what to further examine the why and how. Why is this data or report relevant? How can I make it more relevant?
Finding these answers requires that you get closer to current or potential customers (mobile users) by involving them actively in the process from day one. You need to closely observe their mobile interactions so you can validate your assumptions about the use cases and effectively identify gaps where they may exist.
Ultimately, it all boils down to this: What is the business value?
Is it insight into operations so we can improve productivity? Is it cost savings through early detection and preventive actions? Is it increased sales as a result of identifying new opportunities?
What we design and how we design it will directly guide and influence many of these outcomes. If we have confirmed the link to strategy, considered the relevance, and understood the context, then we have all the right ingredients to effectively deliver business value.
In the absence of these pieces, our value proposition won’t pass muster.
Stay tuned for my next blog in the Mobile Analytics Design series.
I was delighted to be invited to speak at Microsoft’s Empowering Health event in Brussels, Belgium recently, which brought together some 200 thought-leaders from across the world to discuss health IT issues in a ‘Mobile First and Cloud First World’.
I was looking forward to hearing how some of the more progressive countries in Europe are utilising technology to deliver more personal, productive and predictive health to their citizens, so it was pleasing to hear examples from the Netherlands around patient portals and from Sweden, where virtual care rooms are helping to deliver a more efficient healthcare system through patient self-diagnosis. From these very real examples of today to discussions around the future of machine learning and robotics, the narratives were underpinned by the absolute need for clinical staff to have input, as early as possible, into the technology solutions they will be asked to use.
Some great statistics from Tom Lawry, Director of Worldwide Health Analytics, Microsoft, generated a real buzz in the room. Tom started his presentation by stating that ‘we spend a lot of money ONCE people are sick, while most money is spent on small numbers of people who are VERY sick.’ Clearly there are a lot of areas where technology is helping to move the needle from cure to prevention, and the journey from all-in-one-day genome sequencing to personalised medicine is something we are working towards here at Intel as we look ahead to 2020. I was interested to hear examples from across the world of how healthcare providers are dealing with increasingly large amounts of data. Within the European Union there are very different takes on what data is classed as secure and what is not. For providers and vendors, this requires a keen eye on the latest legislation, but it’s clear that one size does not necessarily fit all.
The breakout nursing session brought together a dedicated group of nurses with a real interest in how technology can, and will, help make nursing even better. We kicked off by discussing what level of digital education nurses have today, and what they need to equip them for the future. The consensus was that more needs to be done in helping nurses be prepared for the technology they’ll be asked to use, in essence making technology a core part of the nursing curriculum from day one.
The move towards distributed care generated some fantastic thoughts on how technology can help nurses working in the community – read my recent blog for more thoughts on that. We all agreed that access to healthcare is changing; it has to if we are to meet the demands of an ageing population. Millennials, for example, don’t necessarily think they need to see a medical practitioner in a hospital setting or a doctor’s surgery; they are happy to call a clinician on the phone or sit in a kiosk for a virtual consultation, the priority being quick and easy access.
I was particularly impressed by a new app showcased by Odense University Hospital called Talk2Care – in short, it enables patients in ICU to ‘talk’ to nurses using an icon-based dashboard on a mobile device. This new way of communicating, for patients who would in some cases only be able to nod or shake their head, has been invaluable not only for nurses but for the patient’s family too. What really pleased me was that nurses were actively championing this technology, encouraging patients to utilise it to help nurses deliver a better care experience.
We closed with thoughts on how taking care into the community was being revolutionized by technology. We’ve got some great examples of the role Intel is playing in the advance towards more distributed care, from the use of Intel IoT Gateways to help the elderly live more independent lives at home through to the KU Wellness car which empowers nurses to take advanced care into the community using mobile devices.
After a short break we returned to the main auditorium, where I was pleased to be on stage with nurses from across the world. The future of the workforce was discussed in some detail, particularly around how the nursing and wider healthcare community will manage the anticipated global shortage of nurses. Technology will go some way to alleviating this shortfall through improved workflows, but I like to think in a more visionary way: perhaps we will see the use of avatars, virtual reality and (thinking of discussions earlier in the day) robots. What’s clear is that nursing is changing in response to the move to distributed care; we need to skill not only nurses but other caregivers too, such as families, to make better use of the technology that is available today and tomorrow.
The post Advice to a Network Admin Seeking a Career in Cybersecurity appeared first on Intel Software and Services.
Even after nearly 25 years, I continue to be excited and passionate about security. I enjoy discussing my experiences, opinions, and crazy ideas with the community. I often respond to questions and comments on my blogs and in LinkedIn, as it is a great platform to share ideas and communicate with others in the industry. Recently I had responded to a Network Admin seeking a career in cybersecurity. With their permission, I thought I would share a bit of the discussion as it might be helpful to others.
Mr. Rosenquist – I have been in the Information Technology field as a network administrator for some 16 years and am looking to get into the Cyber Security field but the opportunity for someone that lacks experience in this specialized field is quite difficult. I too recognize the importance of education and believe it is critical to optimum performance in your field. What would your recommendation of suggested potential solutions be to break into this field? Thank you for your time and expertise.
Glad to hear you want to join the ranks of cybersecurity professionals! The industry needs people like you. You have a number of things going for you. The market is hungry for talent and network administration is a great background for several areas of cybersecurity.
Depending on what you want to do, you can travel down several different paths. If you want to stay in the networking aspects, I would recommend either a certification from SANS (or another reputable training organization with recognizable certifications) or diving into becoming a certified expert for a particular firewall/gateway/VPN product (e.g., Palo Alto, Cisco, Check Point, Intel/McAfee). The former will give you the necessary network security credentials to work on architecture, configuration, analysis, operations, policy generation, audit, and incident response. The latter are in very high demand and specialize in the deployment, configuration, operation, and maintenance of those specific products. If you want to throw caution to the wind and explore areas outside of your networking experience, you can go for a university degree and/or security credentials. Both are better, but both may not be necessary.
I recommend you work backwards. Find job postings for your ‘dream job’ and see what the requirements are. Make inquiries about preferred background and experience. This should give you insight into how best to fill out your academic foundation. Hope this helps. – Matthew Rosenquist
The cybersecurity industry is in tremendous need of more people with greater diversity to fill the growing number of open positions. Recent college graduates, new to the workforce, will play a role in satiating the need, but there remain significant opportunities across a wide range of roles. Experienced professionals with a technical, investigative, audit, program management, military, and analysis background can pivot into the cybersecurity domain with reasonable effort. This can be a great prospect for people who are seeking new challenges, very competitive compensation, and excellent growth paths. The world needs people from a wide range of backgrounds, experiences, and skills to be a part of the next generation of cybersecurity professionals.
An open question to my peers; what advice would you give to workers in adjacent fields who are interested in the opportunities of cybersecurity?
Today in Auckland, New Zealand, U.S. Trade Representative Michael Froman will take a critical step to advancing U.S. economic and innovation leadership around the world, while breaking down trade barriers with some of the fastest growing markets for U.S. businesses. … Read more >
The post Intel Commends the Signing of Trans-Pacific Partnership appeared first on Policy@Intel.
Here’s a prediction for 2016: The year ahead will bring the increasing “cloudification” of enterprise storage. And so will the years that follow—because cloud storage models offer the best hope for the enterprise to deal with unbounded data growth in a cost-effective manner.
In the context of storage, cloudification refers to the disaggregation of applications from the underlying storage infrastructure. Storage arrays that previously operated as silos dedicated to particular applications are treated as a single pool of virtualized storage that can be allocated to any application, anywhere, at any time, all in a cloud-like manner. Basically, cloudification takes today’s storage silos and turns them on their sides.
There are many benefits to this new approach that pools storage resources. In lots of ways, those benefits are similar to the benefits delivered by pools of virtualized servers and virtualized networking resources. For starters, cloudification of storage enables greater IT agility and easier management, because storage resources can now be allocated and managed via a central console. This eliminates the need to coordinate the work of teams of people to configure storage systems in order to deploy or scale an application. What used to take days or weeks can now be done in minutes.
And then there are the all-important financial benefits. A cloud approach to storage can greatly increase the utilization of the underlying storage infrastructure, deferring capital outlays and reducing operational costs.
This increased utilization becomes all the more important with ongoing data growth. The old model of continually adding storage arrays to keep pace with data growth and new data retention requirements is no longer sustainable. The costs are simply too high for all those new storage arrays and the data center floor space that they consume. We now have to do more to reclaim the value of the resources we already have in place.
Cloudification isn’t a new concept, of course. The giants of the cloud world—such as Google, Facebook, and Amazon Web Services—have taken this approach from their earliest days. It is one of their keys to delivering high-performance data services at a huge scale and a relatively low cost. What is new is the introduction of cloud storage in enterprise environments. As I noted in my blog on non-volatile memory technologies, today’s cloud service providers are, in effect, showing enterprises the path to more efficient data centers and increased IT agility.
Many vendors are stepping up to help enterprises make the move to on-premises cloud-style storage. Embodiments of the cloudification concept include Google’s GFS and its successor Colossus, Apache’s HDFS (deployed at massive scale by Facebook), Microsoft’s Windows Azure Storage (WAS), Red Hat’s Ceph/Rados (and GlusterFS), and Nutanix’s Distributed File System (NDFS), among many others.
The Technical View
At this point, I will walk through the architecture of a cloud storage environment, for the benefit of those who want the more technical view.
Regardless of the scale or vendor, most of the implementations share the same storage system architecture. That architecture has three main components: a name service, a two-tiered storage service, and a replicated log service. The architectural drill-down looks like this:
The “name service” is a directory of all the volume instances currently being managed. Volumes are logical data containers, each with a unique name—in other words, a namespace of named objects. A user of storage services attaches to their volume via a directory lookup that resolves the name to the actual data container.
This data container actually resides in a two-tier storage service. The frontend tier is optimized for memory. All requests submitted by end-users are handled by this tier: metadata lookups as well as servicing read requests out of cache and appending write operations to the log.
The backend tier of the storage service provides a device-based, stable store. The tier is composed of a set of device pools, each pool providing a different class of service. Simplistically, one can imagine this backend tier supporting two device pools. One pool provides high performance but has a relatively small amount of capacity. The second pool provides reduced performance but a huge amount of capacity.
Finally, it is important to tease out the frontend tier’s log facility as a distinct, third component, because this facility is key to supporting performant write requests while satisfying data availability and durability requirements.
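To make the three components concrete, here is a heavily simplified sketch of the architecture. It is illustrative only; real systems such as WAS or Colossus are vastly more involved (replication, sharding, failure handling), and every class and method name below is invented:

```python
class NameService:
    """Directory mapping volume names to their data containers."""
    def __init__(self):
        self._volumes = {}

    def create(self, name, container):
        self._volumes[name] = container

    def resolve(self, name):
        # Directory lookup resolving a name to the actual container
        return self._volumes[name]


class ReplicatedLog:
    """Append-only log; in a real system each append is replicated
    before being acknowledged, which is what makes writes durable."""
    def __init__(self):
        self.entries = []

    def append(self, record):
        self.entries.append(record)
        return len(self.entries) - 1  # log sequence number


class TieredContainer:
    """Frontend tier: in-memory cache plus log appends.
    Backend tier: a slower, stable store (one dict per 'device pool'
    is elided here for brevity)."""
    def __init__(self, log):
        self._cache = {}      # memory-optimized frontend tier
        self._backend = {}    # device-based, stable backend tier
        self._log = log

    def write(self, key, value):
        self._log.append((key, value))  # durable first...
        self._cache[key] = value        # ...then visible in cache

    def read(self, key):
        if key in self._cache:          # serviced out of cache
            return self._cache[key]
        return self._backend.get(key)   # fall back to stable store

    def flush(self):
        """Destage cached writes to the backend tier."""
        self._backend.update(self._cache)


ns = NameService()
ns.create("vol-accounts", TieredContainer(ReplicatedLog()))
vol = ns.resolve("vol-accounts")
vol.write("row:42", b"payload")
print(vol.read("row:42"))  # → b'payload'
```

Note how the write path touches the log before the cache: that ordering is what lets the frontend acknowledge writes quickly while still guaranteeing the data can be recovered.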
In the weeks ahead, I will take up additional aspects of the cloudification of storage. In the meantime, you can learn about things Intel is doing to enable this new approach to storage at intel.com/storage.
By Alice Borrelli, Global Healthcare Director Tomorrow, Senate and House champions for remote care will take an important step toward making the lives of chronic disease patients better through the introduction of the CONNECT for Health legislation. Working with providers … Read more >
The post Intel Supports CONNECT for Health Making Chronic Disease Patients’ Lives Better appeared first on Policy@Intel.
While every facet of data center management is changing at a rapid pace, operating budgets rarely keep up. Data volume doubles every 18 months and applications every two years; in contrast, operating budgets take eight years to double (IDC Directions, 2014).
IT has always been asked to do more with less, but the dynamic nature of the data center has been accelerating in recent years. Smart devices, big data, virtualization, and the cloud continue to change service delivery models and elevate the importance of flexibility, elasticity, and scalability.
Every facet of data center management, as a result, has been complicated by an incredibly rapid rate of change. Thousands of devices move on and off intranets. Fluid pools of compute resources are automatically allocated. Does this ultra-dynamic environment make it impossible for IT and facilities management teams to identify under-utilized and over-stressed resources?
If so, energy consumption in the data center will continue to skyrocket. And data centers already consume 10 percent of all energy produced around the globe, according to recent Natural Resources Defense Council reports.
Fortunately, IT is far from powerless even within these challenging data center conditions.
Discovering some secret weapons
Ironically, in today’s data centers consisting of software-defined resources, the secret weapon for curbing energy costs lies in the hardware. Rack and blade servers, switches, power distribution units, and many other data center devices provide a wealth of power and temperature information during operation. Data center scale and the diversity of the hardware make it too cumbersome to manually collect and apply this information, which has led to a growing ecosystem of energy management solution providers.
Data center managers, as a result, have many choices today. They can take advantage of a management console that integrates energy management, have an integrator add energy management middleware to an existing management console, or independently deploy an energy management middleware solution to gain the necessary capabilities.
Regardless of the deployment option, a holistic energy management solution allows IT and facilities teams to view, log, and analyze energy and temperature behaviors throughout the data center. Automatically collected and aggregated power and thermal data can drive graphical maps of each room in a data center, and data can be analyzed to identify trends and understand workloads and other variables.
Visibility and the ability to log energy information equip data center managers to answer basic questions about consumption, and to make better decisions relating to data center planning and optimization efforts.
Best-in-class energy management solutions take optimization to a higher level by combining automated monitoring and logging with real-time control capabilities. For example, thresholds can be set to cap power for certain servers or racks at appropriate times or when conditions warrant. Servers that are idle for longer than a specified time can be put into power-conserving sleep modes. Power can be allocated based on business priorities, or to extend the life of back-up power during an outage. Server clock rates can even be adjusted dynamically to lower power consumption without negatively impacting service levels or application performance.
Energy-conscious data centers take advantage of these capabilities to meet a broad range of operating objectives including accurate capacity planning, operating cost reduction, extending the life of data center equipment, and compliance with “green” initiatives.
Common uses and proven results
Customer deployments highlight several common motivations, and provide insights in terms of the types and scale of results that can be achieved with a holistic energy management solution and associated best practices.
Power monitoring. Identifying and understanding peak periods of power use motivate many companies to introduce an energy management solution. The insights gained have allowed customers to reduce usage by more than 15 percent during peak hours, and to reduce monthly data center utility bills even as demands for power during peak periods go up. Power monitoring is also being applied to accurately charge co-location and other service users.
Increasing rack densities. Floor space is another limiting factor for scaling up many data centers. Without real-time information, static provisioning has traditionally relied upon power supply ratings or derated levels based on lab measurements. Real-time power monitoring typically proves that the actual power draw comes in much lower. With the addition of monitoring and power capping, data centers can more aggressively provision racks and drive up densities by 60 to more than 80 percent within the same power envelope.
Identifying idle or under-used servers. “Ghost” servers draw as much as half of the power used during peak workloads. Energy management solutions have shown that 10 to 15 percent of servers fall into this category at any point in time, and help data center managers better consolidate and virtualize to avoid this wasted energy and space.
Early identification of potential failures. Besides monitoring and automatically generating alerts for dangerous thermal hot spots, power monitoring and controls can extend UPS uptime by up to 15 percent and prolong business continuity by up to 25 percent during power outages.
Advanced thermal control. Real-time thermal data collection can drive intuitive heat maps of the data center without adding expensive thermal sensors. Thermal maps can be used to dramatically improve oversight and fine-grained monitoring (from floor level to device level). The maps also improve capacity planning, and help avoid under- and over-cooling. With the improved visibility and threshold setting, data center managers can also confidently increase ambient operating temperatures. Every one-degree increase translates to 5 to 10 percent savings in cooling costs.
Balancing power and performance. Trading off raw processor speed for smarter processor design has allowed data centers to decrease power by 15 to 25 percent with little or no impact on performance.
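The provisioning and cooling arithmetic behind those figures can be illustrated directly. The input numbers below are invented for the example, not measurements from any particular deployment:

```python
def rack_headroom(nameplate_w, measured_peak_w, rack_budget_w):
    """Servers per rack when provisioning on nameplate ratings
    versus measured peak draw, for a fixed rack power budget."""
    by_nameplate = rack_budget_w // nameplate_w
    by_measured = rack_budget_w // measured_peak_w
    return by_nameplate, by_measured

def cooling_savings(degrees_raised, pct_per_degree=0.05):
    """Estimated cooling-cost reduction from raising the ambient
    setpoint, using the low end of the 5-10% per degree rule."""
    return 1 - (1 - pct_per_degree) ** degrees_raised

# 750 W nameplate vs. 450 W observed peak, 12 kW rack budget
print(rack_headroom(750, 450, 12_000))  # → (16, 26)
print(round(cooling_savings(3), 3))     # → 0.143
```

In this hypothetical rack, provisioning on measured draw (with capping as a safety net) fits 26 servers where nameplate ratings allowed only 16, an increase of roughly 60 percent, which is consistent with the density gains described above.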
Time to get serious about power
Bottom line, data center hardware still matters. The constantly evolving software approaches for mapping resources to applications and services call for real-time, fine-grained monitoring of the hardware. Energy management solutions make it possible to introduce this monitoring, along with power and thermal knobs that put IT and facilities in control of energy resources that already account for the largest line item on the operating budget.
Software and middleware solutions that allow data center managers to keep their eyes on the hardware and the environmental conditions let automation move ahead full speed, safely, and affordably – without skyrocketing utility bills. Power-aware VM migration and job scheduling should be the standard practice in today’s power-hungry data centers.