Topic: Apache Spark* and the Trusted Analytics Platform (TAP) – Join this discussion with the engineering team behind integrating Spark capabilities into TAP.
When: March 3, 2015 @ 08:00 – 09:00… Read more
When you look around the DistribuTECH show floor, there’s one area that you can see is clearly growing in the energy industry: the Internet of Things (IoT). Companies are offering IoT solutions for nearly every aspect of the power grid. … Read more >
The post Thoughts from DistribuTECH Day 2: Situational Awareness and IoT appeared first on Grid Insights by Intel.
A CIO’s official title doesn’t include “Person Keeping the Computers Running.” Yes, you help ensure that happens, but according to CIO.com’s 2015 State of the CIO, an alarming 13 percent of IT professionals don’t see CIOs as industry leaders, despite the fact that 44 percent of those CIOs report directly to the CEO.
In previous blogs, I’ve discussed how CIOs can use social media to continue learning about the industry and raise awareness of the CIO as an influential business leader. But how can a CIO grow their social media presence to become more than the face of their business’s tech? By taking advantage of social media’s wealth of knowledge to truly innovate for their company and share successes to inspire industry change.
One way CIOs can improve the face of social IT is by collaborating and engaging with some of the most influential CIOs on social media. This sort of teamwork adds value and vision to the industry and champions adoption of new technologies.
If you’ve been experimenting on social media lately by retweeting IT influencers or reposting articles on LinkedIn, it’s time to up your game and actually engage. Challenge insights, compare notes, and share stories about trying new tech or processes others have adopted. Here are a few ways to improve your engagement with industry leaders:
By increasing your social media engagement, you’ll foster new relationships. Discussing topics important to your company and possible solutions not only raises IT awareness, it also puts your brand at the forefront of the conversation.
Your everlasting role as a social CIO is to continue learning. If you’re learning from social media, you should also be applying what you learn. A social CIO knows how to rise from reactive facilitator to proactive leader.
A reactive facilitator sources solutions that might already be in motion in their own IT group or social connections. That’s a step in the right direction toward staying current and relevant in the tech conversation. But a proactive leader stays one step ahead of the game by keeping an eye out for needs and seeking (or creating) solutions. As you become more knowledgeable, you’ll notice trends and be actively in tune with where the industry is headed.
I’m constantly impressed by CIO collaboration on social media. It’s becoming less about marketing and more about improving the industry itself with true innovation. There’s too much opacity today among enterprises. When IT decision makers begin to show a little bit of transparency and share solutions, they improve global technology for the better.
Now that you know how to connect and engage with the IT greats, get out there and collaborate with others to improve your own business and solutions. As you gain more recognition on social media, you’ll have people coming to you with questions and suggestions of their own, which is a great segue to my next blog post on using social media for talent acquisition. Stay tuned …
Sepsis is one of the leading causes of hospital readmissions and death in the United States, impacting some 750,000 patients per year at a cost of $16.7 billion annually to the healthcare system. Reducing the impact of sepsis cases even slightly would significantly enhance patient outcomes and reduce unnecessary expenses.
While the understanding and treatment of sepsis is improving, early detection and diagnosis of the condition continues to be a challenge. In the above video, see how Cerner developed a solution to that challenge: the St. John Sepsis Agent, which uses Intel technology and to date has helped save more than 2,700 lives by identifying sepsis in its early stages. According to Cerner, organizations that implement the St. John Sepsis Agent can achieve $5,882 in medical savings per treated patient, a 21 percent reduction in length of stay, and a 24 percent reduction in in-hospital patient mortality rates.
Also in the case study video, see how Cerner aggregates big data and utilizes analytics to enable population health, and how Intel and Cloudera allow Cerner to provide a technology platform to support massive amounts of storage capacity, scalable parallel processing with near real-time alerts, as well as high levels of security.
DRIVER VERSION: 220.127.116.11.4380 & 18.104.22.168.4380
DATE: February 5, 2016
This driver is in zip format intended for developers and IT professionals.
32bit – win32_154018.4380.zip
64bit -… Read more
DistribuTECH 2016 is off to a strong start with lots of activity on the show floor, particularly around mobility and collaboration tools. Workforce transformation is a key business driver in the energy industry, given that 50 percent of current field … Read more >
The post Thoughts from DistribuTECH Day 1: Mobility and Collaboration appeared first on Grid Insights by Intel.
Please join IXPUG (Intel® Xeon Phi™ Users Group) for a meeting at the Clarion Congress Hotel, hosted by IT4Innovations, VŠB – Technical University of Ostrava in Ostrava, Czech Republic. The meeting… Read more
Unlike software components operating within an enterprise, the Web services model establishes a loosely coupled relationship between a service producer and a service consumer. Service consumers have little control over services that they employ within their applications. A service is … Read more >
The post Web Services-based Development: Challenges and Opportunities appeared first on Intel Software and Services.
Based on reports in recent news, some forms of insider threat get a lot of attention. Just about everyone has heard of examples of damage caused by a disgruntled employee, workplace violence, or theft of intellectual property. But insider threat is actually much larger than those common examples. At Intel, we’ve been studying this situation and have documented our findings in a white paper we call the Insider Threat Field Guide. In this field guide, we discuss 13 distinct insider threat agent types and the insider events they are most likely to cause, providing a comprehensive approach to identifying the most likely insider threat vectors. We are sharing this guide so other companies can improve their security stance too.
For example, one threat agent type we identified is the “outward sympathizer.” Our identification of this character is unique in the industry—we were unable to find any published analysis of this type of insider threat. We define an outward sympathizer as a person who knowingly misuses the enterprise’s systems to attack others in support of a cause external to the enterprise.
As we developed the field guide, we characterized the outward sympathizer threat as follows:
The outward sympathizer is a complex threat agent and triggering events can vary widely. Perhaps there is conflict in a country in which family resides, or an environmental issue that the insider feels strongly about. It can be difficult to predict what will trigger an outward sympathizer attack because the reason for the attack may be entirely unique to the sympathizer and not obvious to others.
Outward sympathizer activity can occur at three escalating levels. Even the most benign level could potentially have devastating consequences for the enterprise.
Enterprises should include outward sympathizers in their own insider threat models and plan for mitigation. Because this type of threat agent presents differently than most other characters, particularly at the benign level, it can be hard to detect—in fact, some of their methods may not be traceable back to the individual. The unique aspects of the outward sympathizer are motivation and timing, so the most effective mitigations will target those.
Research by CERT and others suggests that strong tone-from-the-top security messaging is an effective behavioral deterrent, especially for non-professional threat actors. In addition, we use the following techniques to help minimize the likelihood of outward sympathizer events:
The technical methods used by outward sympathizers are not unique (as a class) and follow classic attack patterns. Technical controls are environmental, not specific. In particular, although it is common to monitor networks for incoming attacks, it is less common to monitor for outgoing attacks. Other effective technical controls include the following:
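To make the outbound-monitoring point concrete, here is a minimal sketch, not an Intel tool, of one classic outgoing-attack signal: a single internal host fanning out to many external destinations on one port, as in a scan or a participation in a distributed attack. The record shape, threshold, and addresses are assumptions for illustration only.

```python
# Illustrative sketch: flag internal hosts whose *outgoing* traffic fans out
# to an unusually large number of distinct destinations on a single port.
# SCAN_THRESHOLD and the flow-record shape are assumptions for this example.
from collections import defaultdict

SCAN_THRESHOLD = 50  # distinct destinations per (source, port) before flagging

def flag_outbound_scans(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples for outgoing traffic."""
    targets = defaultdict(set)
    for src, dst, port in flows:
        targets[(src, port)].add(dst)
    # Report each (source, port) pair that exceeds the fan-out threshold.
    return [key for key, dsts in targets.items() if len(dsts) >= SCAN_THRESHOLD]

# Example data: one host probing 60 SSH targets, one making a normal HTTPS call.
flows = [("10.0.0.5", f"203.0.113.{i}", 22) for i in range(60)]
flows += [("10.0.0.7", "198.51.100.9", 443)]
print(flag_outbound_scans(flows))  # only the scanning host is flagged
```

The same aggregation logic applies whether the records come from NetFlow, firewall logs, or a SIEM; the point is simply that the collection and thresholds face outward rather than inward.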
Intel IT’s Insider Threat Field Guide—including our understanding of the outward sympathizer threat agent—is an innovative way of looking at the full scope of insider threats. I believe other security professionals can use the field guide to identify and prioritize insider threats, communicate the risk of these threats, and optimize the use of information security resources to develop an effective defense strategy. I encourage you to share your feedback on the field guide by leaving a comment below. In addition, if you are looking for more information about our other security solutions, check out the 2015-2016 Intel IT Annual Performance Report. We hope you will join the conversation!
As the first release of the Intel® RealSense™ SDK (Windows) in 2016, R1 (aka version 22.214.171.12428) focuses on improvements for the Intel® RealSense™ SR300 camera, introduction of Platform Camera… Read more
(Cross-posted from my blog on http://evangelists.intel.com)
If you are a fan of PHP for developing web applications (as I am), you could feel the world shift a bit as it was announced that… Read more
If you are a fan of PHP for developing web applications (as I am), you could feel the world shift a bit as it was announced that WordPress was switching from PHP to Node.js. Don’t think of WordPress as a blog … Read more >
1. An eCryptfs*-Based Solution for Securing Your Data on Android*
The threat to data on mobile devices is a serious issue. Not only have Android* developers worked on security, but many application… Read more
Here is another success story of how Multi-OS Engine Technology Preview helped one of our customer firms, Auriga, which has been developing innovative solutions for its clients for 25 years in the… Read more
Check out this joint Intel & Cloudera blog to get an update on the progress of the effort to bring erasure coding to HDFS, including a report about fresh performance benchmark testing results. Read more
A rise in the use of mobile devices and applications has heightened the demand for organizations to elevate their plans to deliver mobile analytics solutions. However, designing mobile analytics solutions without understanding your audience and purpose can sometimes backfire.
I frequently discover that in mobile analytics projects, understanding the purpose is where we take things for granted and fall short—not because we don’t have the right resources to understand it better, but because we tend to form the wrong assumptions. Better understanding of the “mobile purpose” is critical for success and we need to go beyond just accepting the initial request at the onset of our engagements.
The Merriam-Webster dictionary defines purpose as “the reason why something is done or used: the aim or intention of something.” Although the reasons for a mobile analytics project may appear obvious on the surface, re-evaluating the initial assumptions often proves invaluable for both the design and the longevity of mobile projects.
Here are a few points to keep in mind before you schedule your first meeting or lay down a single line of code.
I often talk about the importance of executive sponsorship. There’s no better person than the executive sponsor to provide guidance and validation. When it comes to technology projects (and mobile analytics is no different), our engagements need to be linked directly to our strategy. We must make sure that everything we do contributes to our overall business goal.
Is it relevant? It’s a simple question, yet we have a tendency to take it for granted and overlook its significance. It doesn’t matter whether we’re designing a strategy for mobile analytics or a simple mobile report—relevance matters.
Moreover, it isn’t enough to study only the current application. We need to ask: Will it still be relevant by the time we deliver? Even with rapid deployment solutions and agile project methodologies, there’s a risk that certain requirements become irrelevant, either because the business processes that mobile analytics depends on change, or because the solution itself highlights gaps that force a redesign of those processes. In the end, what we do must be relevant both now and at go-live.
Understanding the context is crucial, because everything we do and design will be interpreted according to the context in which the mobile analytics project is managed or the mobile solutions are delivered. When we talk about context in mobile analytics, we mustn’t think only about the data consumed on the mobile device, but also how that data is consumed and why it was required in the first place.
We’re also interested in going beyond the what to further examine the why and how. Why is this data or report relevant? How can I make it more relevant?
Finding these answers requires that you get closer to current or potential customers (mobile users) by involving them actively in the process from day one. You need to closely observe their mobile interactions so you can validate your assumptions about the use cases and effectively identify gaps where they may exist.
Ultimately, it all boils down to this: What is the business value?
Is it insight into operations so we can improve productivity? Is it cost savings through early detection and preventive actions? Is it increased sales as a result of identifying new opportunities?
What we design and how we design will directly guide and influence many of these outcomes. If we have confirmed the link to strategy, considered the relevance, and understood the context, then we have all the right ingredients to effectively deliver business value.
In the absence of these pieces, our value proposition won’t pass muster.
Stay tuned for my next blog in the Mobile Analytics Design series.
Even after nearly 25 years, I continue to be excited and passionate about security. I enjoy discussing my experiences, opinions, and crazy ideas with the community. I often respond to questions and comments on my blogs and in LinkedIn, as … Read more >
The post Advice to a Network Admin Seeking a Career in Cybersecurity appeared first on Intel Software and Services.
Even after nearly 25 years, I continue to be excited and passionate about security. I enjoy discussing my experiences, opinions, and crazy ideas with the community. I often respond to questions and comments on my blogs and on LinkedIn, as it is a great platform to share ideas and communicate with others in the industry. Recently I responded to a network admin seeking a career in cybersecurity. With their permission, I am sharing a bit of the discussion, as it might be helpful to others.
Mr. Rosenquist – I have been in the Information Technology field as a network administrator for some 16 years and am looking to get into the Cyber Security field but the opportunity for someone that lacks experience in this specialized field is quite difficult. I too recognize the importance of education and believe it is critical to optimum performance in your field. What would your recommendation of suggested potential solutions be to break into this field? Thank you for your time and expertise.
Glad to hear you want to join the ranks of cybersecurity professionals! The industry needs people like you. You have a number of things going for you. The market is hungry for talent and network administration is a great background for several areas of cybersecurity.
Depending on what you want to do, you can travel down several different paths. If you want to stay in the networking aspects, I would recommend either a certification from SANS (or another reputable training organization with recognizable certifications) or diving into becoming a certified expert for a particular firewall/gateway/VPN product (e.g., Palo Alto, Cisco, Check Point, Intel/McAfee). The former will give you the necessary network security credentials to work on architecture, configuration, analysis, operations, policy generation, audit, and incident response. The latter skills are in very high demand and specialize in the deployment, configuration, operation, and maintenance of those specific products. If you want to throw caution to the wind and explore areas outside of your networking experience, you can pursue a university degree and/or security credentials. Both are better, though both may not be necessary.
I recommend you work backwards: find job postings for your ‘dream job’ and see what the requirements are. Make inquiries about preferred background and experience. This should give you insight into how best to fill out your academic foundation. Hope this helps. – Matthew Rosenquist
The cybersecurity industry is in tremendous need of more people, with greater diversity, to fill the growing number of open positions. Recent college graduates, new to the workforce, will play a role in meeting that need, but significant opportunities remain across a wide range of roles. Experienced professionals with technical, investigative, audit, program management, military, or analysis backgrounds can pivot into the cybersecurity domain with reasonable effort. This can be a great prospect for people seeking new challenges, very competitive compensation, and excellent growth paths. The world needs people from a wide range of backgrounds, experiences, and skills to be part of the next generation of cybersecurity professionals.
An open question to my peers; what advice would you give to workers in adjacent fields who are interested in the opportunities of cybersecurity?
Today in Auckland, New Zealand, U.S. Trade Representative Michael Froman will take a critical step to advancing U.S. economic and innovation leadership around the world, while breaking down trade barriers with some of the fastest growing markets for U.S. businesses. … Read more >
The post Intel Commends the Signing of Trans-Pacific Partnership appeared first on Policy@Intel.
Here’s a prediction for 2016: The year ahead will bring the increasing “cloudification” of enterprise storage. And so will the years that follow—because cloud storage models offer the best hope for the enterprise to deal with unbounded data growth in a cost-effective manner.
In the context of storage, cloudification refers to the disaggregation of applications from the underlying storage infrastructure. Storage arrays that previously operated as silos dedicated to particular applications are treated as a single pool of virtualized storage that can be allocated to any application, anywhere, at any time, all in a cloud-like manner. Basically, cloudification takes today’s storage silos and turns them on their sides.
There are many benefits to this new approach that pools storage resources. In lots of ways, those benefits are similar to the benefits delivered by pools of virtualized servers and virtualized networking resources. For starters, cloudification of storage enables greater IT agility and easier management, because storage resources can now be allocated and managed via a central console. This eliminates the need to coordinate the work of teams of people to configure storage systems in order to deploy or scale an application. What used to take days or weeks can now be done in minutes.
And then there are the all-important financial benefits. A cloud approach to storage can greatly increase the utilization of the underlying storage infrastructure, deferring capital outlays and reducing operational costs.
This increased utilization becomes all the more important with ongoing data growth. The old model of continually adding storage arrays to keep pace with data growth and new data retention requirements is no longer sustainable. The costs are simply too high for all those new storage arrays and the data center floor space that they consume. We now have to do more to reclaim the value of the resources we already have in place.
Cloudification isn’t a new concept, of course. The giants of the cloud world—such as Google, Facebook, and Amazon Web Services—have taken this approach from their earliest days. It is one of their keys to delivering high-performance data services at a huge scale and a relatively low cost. What is new is the introduction of cloud storage in enterprise environments. As I noted in my blog on non-volatile memory technologies, today’s cloud service providers are, in effect, showing enterprises the path to more efficient data centers and increased IT agility.
Many vendors are stepping up to help enterprises make the move to on-premises cloud-style storage. Embodiments of the cloudification concept include Google’s GFS and its successor Colossus, the Hadoop Distributed File System (HDFS) that underpins Facebook’s data infrastructure, Microsoft’s Windows Azure Storage (WAS), Red Hat’s Ceph/RADOS (and GlusterFS), and Nutanix’s Distributed File System (NDFS), among many others.
The Technical View
At this point, I will walk through the architecture of a cloud storage environment, for the benefit of those who want the more technical view.
Regardless of the scale or vendor, most of the implementations share the same storage system architecture. That architecture has three main components: a name service, a two-tiered storage service, and a replicated log service. The architectural drill-down looks like this:
The “name service” is a directory of all the volume instances currently being managed. Volumes are logical data containers, each with a unique name; in other words, the name service is a namespace of named objects. A consumer of storage services attaches to a volume via a directory lookup that resolves the name to the actual data container.
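Reduced to its essence, the name service is a lookup table from volume names to container descriptors. The sketch below is illustrative only; the class, method names, and descriptor fields are assumptions, not any vendor’s API.

```python
# Minimal sketch of a name service: a directory mapping volume names to the
# descriptors of their data containers. Names and fields are made up here.
class NameService:
    def __init__(self):
        self._directory = {}

    def register(self, volume_name, container):
        """Add a volume (logical data container) to the namespace."""
        self._directory[volume_name] = container

    def resolve(self, volume_name):
        """A storage consumer attaches to its volume via this lookup."""
        return self._directory[volume_name]

ns = NameService()
ns.register("billing-vol-01", {"tier": "frontend", "shards": 4})
print(ns.resolve("billing-vol-01"))  # the descriptor the consumer attaches to
```

Production systems layer replication and caching onto this directory, but the consumer-facing contract is the same: a name goes in, a container location comes out.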
This data container actually resides in a two-tier storage service. The frontend tier is memory-optimized and handles all end-user requests: metadata lookups, read requests served out of cache, and write operations appended to the log.
The backend tier of the storage service provides a device-based, stable store. The tier is composed of a set of device pools, each pool providing a different class of service. Simplistically, one can imagine this backend tier supporting two device pools. One pool provides high performance but has a relatively small amount of capacity. The second pool provides reduced performance but a huge amount of capacity.
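The two-pool picture above amounts to a simple placement decision. The following sketch makes that decision explicit; the pool names, capacities, and latencies are invented for illustration and do not describe any particular product.

```python
# Sketch of the backend tier's class-of-service pools: a small, fast pool and
# a large, slower one. All numbers here are illustrative assumptions.
POOLS = {
    "performance": {"capacity_tb": 20,  "latency_ms": 0.1},
    "capacity":    {"capacity_tb": 500, "latency_ms": 5.0},
}

def choose_pool(hot):
    """Place hot (frequently accessed) data in the fast pool, cold data in
    the large pool. Real systems base 'hot' on observed access patterns."""
    return "performance" if hot else "capacity"

print(choose_pool(hot=True))   # frequently read data lands on the fast pool
print(choose_pool(hot=False))  # archival data lands on the high-capacity pool
```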
Finally, it is important to tease out the frontend tier’s log facility as a distinct, third component, because this facility is key to supporting performant write requests while satisfying data availability and durability requirements.
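The reason the replicated log reconciles write performance with durability is that a write can be acknowledged as soon as a quorum of log replicas has appended it, rather than waiting for the backend tier. The sketch below models that idea in-process; the replica and quorum counts are assumptions, and a real implementation would issue network RPCs where this code appends to lists.

```python
# Sketch of the frontend write path: a write is appended to replicated logs
# and acknowledged once a quorum confirms. Replica/quorum counts are assumed.
class ReplicatedLog:
    def __init__(self, replicas=3, quorum=2):
        self.replicas = [[] for _ in range(replicas)]
        self.quorum = quorum

    def append(self, record):
        acks = 0
        for replica in self.replicas:
            replica.append(record)  # in practice, an RPC to a log server
            acks += 1
            if acks >= self.quorum:
                # Acknowledge the writer at quorum; in a real system the
                # remaining replicas catch up asynchronously.
                return True
        return False

log = ReplicatedLog()
ok = log.append({"volume": "billing-vol-01", "op": "write", "nbytes": 4096})
print(ok)  # True: a quorum of replicas holds the record
```

Because the log is append-only and replicated, the write is durable the moment the quorum acknowledges it, and the slower backend tier can absorb the data later without stalling the writer.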
In the weeks ahead, I will take up additional aspects of the cloudification of storage. In the meantime, you can learn about things Intel is doing to enable this new approach to storage at intel.com/storage.