On April 10th, 2015, I was fortunate to travel from Hillsboro, Oregon to San Francisco, California to take part in the International NASA Space App Challenge hosted at Constant Contact… Read more
The Johnny-Five robotics framework has made a big leap forward, migrating its primary point of presence away from creator Rick Waldron’s personal GitHub account to a brand new website: Johnny-Five.io. The new website features enhanced documentation, sample code and links … Read more >
The post The Johnny-Five Framework Gets A New Website, Adds SparkFun Support appeared first on Intel Software and Services.
The authors of the PVS-Studio static code analyzer invite programmers to test their eyesight by trying to find errors in C/C++ code fragments.
Code analyzers work tirelessly and are able to find many bugs… Read more
This month’s hardware seeding contest is for one of two (2) Cyberpower Zeus Hercules laptops. It comes equipped with a 4th Gen Core i7 processor and Intel® Iris™ Pro Graphics – our highest end… Read more
(This is a guest post from 2013 Level Up Game Contest winner Greg Lobanov.)
In 2013, I won the grand prize in the Intel LevelUp Game Demo contest with my game, Perfection. The resulting… Read more
The Year of the Goat, a symbol of good fortune, marks the celebration of global ecosystem collaboration to advance the Internet of Things. To encourage collaboration at an unprecedented level, Intel announced the opening of the first Intel IoT lab … Read more >
The post Intel Champions Internet of Things Collaborations at IDF Shenzhen appeared first on IoT@Intel.
This week I was invited to join AWS Solutions Architects and a group of 40 developers at the AWS Pop-up Loft in San Francisco. The goal? Show developers how AWS can be used to power the Internet of… Read more
Last week the OpenDaylight community held a Hackfest in Santa Clara, which takes place just prior to the code freeze for the current release. OpenDaylight is zeroing in on the Lithium release and this… Read more
As part of Phoenix Urban Design Week 2015 activities I attended the local premiere of “Makers” at Filmbar in downtown Phoenix. Preceded by a short bike ride, and followed by a short discussion… Read more
With the proliferation of popular software-as-a-service (SaaS) offerings, the scale of compute has changed dramatically. The boundaries of enterprise IT now extend far beyond the walls of the corporate data center.
You might even say those boundaries are disappearing altogether. Where we once had strictly on-premises IT, we now have a highly customized and complex IT ecosystem that blurs the lines between the data center and the outside world.
When your business units are taking advantage of cloud-based applications, you probably don’t know where your data is, what systems are running the workloads, and what sort of security is in place. You might not even have a view of the delivered application performance, and whether it meets your service-level requirements.
This lack of visibility, transparency, and control is at once both unsustainable and unacceptable. And this is where IT analytics enters the picture—on a massive scale.
To make a successful transition to the cloud, in a manner that keeps up with the evolving threat landscape, enterprise IT organizations need to leverage sophisticated data analytics platforms that can scale to hundreds of billions of events per day. That’s not a typo—we are talking about moving from analyzing tens of millions of IT events each day to analyzing hundreds of billions of events in the new enterprise IT ecosystem.
This isn’t just a vision; this is an inevitable change for the IT organization. To maintain control of data, to meet compliance and performance requirements, and to work proactively to defend the enterprise against security threats, we will need to gain actionable insight from an unfathomable amount of data. We’re talking about data stemming from event logs, network devices, servers, security and performance monitoring tools, and countless other sources.
Take the case of security. To defend the enterprise, IT organizations will need to collect and sift through voluminous amounts of two types of contextual information:
- “In the moment” information on devices, networks, operating systems, applications, and locations where information is being accessed. The key here is to provide near-real-time actionable information to policy decision and enforcement points (think of credit card companies’ fraud services).
- “After the fact” information from event logs, raw security-related events, NetFlow and packet data, along with other indicators of compromise that can be correlated with other observable/collectable information.
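As a rough illustration of the “after the fact” case, correlating collected events against indicators of compromise can be as simple as matching fields. This is a minimal sketch with invented hosts, addresses, and field names; real platforms do this across billions of events per day.

```python
# Hypothetical sketch: correlate "after the fact" event logs against a set
# of indicators of compromise (IOCs). All names and values are illustrative.

IOC_ADDRESSES = {"203.0.113.7", "198.51.100.23"}  # example known-bad IPs

event_log = [
    {"host": "web01", "src_ip": "203.0.113.7",   "action": "login_failed"},
    {"host": "db02",  "src_ip": "10.0.0.14",     "action": "query"},
    {"host": "web01", "src_ip": "198.51.100.23", "action": "port_scan"},
]

def correlate(events, iocs):
    """Return the events whose source address matches a known indicator."""
    return [e for e in events if e["src_ip"] in iocs]

hits = correlate(event_log, IOC_ADDRESSES)
for e in hits:
    print(f"ALERT {e['host']}: {e['action']} from {e['src_ip']}")
```

At enterprise scale the same join runs on a distributed analytics platform rather than in a Python list comprehension, but the correlation logic is the same shape.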
As we enter this brave new world for IT, it’s clear that we will need an analytics platform that will allow us to store and process data at an unprecedented scale. We will also need new algorithms and new approaches that will allow us to glean near real-time and historical insights from a constant flood of data.
In an upcoming post, I will look at some of the requirements for this new-era analytics platform. For now, let’s just say we’re gonna need a bigger boat.
Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.
I have studied many errors caused by the copy-paste method and can assure you that programmers most often make mistakes in the last fragment of a homogeneous code block. I have… Read more
Yesterday we celebrated the 50th anniversary of Moore’s Law, the foundational model of computing innovation. While the past half-century of industry innovation based on the advancement of Moore’s Law is astounding, what’s exciting today is that we’re at the beginning of the next generation of information and communication technology architecture, enabling the move to the digital services economy. Nowhere are the opportunities more acute than in the data center.
This evening, at the Code/Enterprise Series in San Francisco, I had the pleasure of sharing Intel’s perspective on the disruptive force the data center transformation will have on businesses and societies alike. Like no time before, the data center stands at the heart of technology innovation connecting billions of people and devices across the globe and delivering services to completely transform businesses, industries, and people’s lives.
To accelerate this vision Intel is delivering a roadmap of products that enable the creation of a software-defined data center – a data center where the application defines the system. One area I’m particularly excited about is our work with the health care community to fundamentally change the experience of a cancer patient. Here, technology is used to speed up and scale the creation and application of precision medicine.
Our goal? By 2020, a patient can have her cancerous cells analysed through genome sequencing, compared to countless other sequences through a federated, trusted cloud, and a precision treatment created… all in one day.
We are also expanding our business focus into new areas where our technology can accelerate business transformation, a clear example being the network. Our recent announcements with Ericsson and Huawei highlight deep technical collaborations that will help the telco industry deliver new services to their end users with greater network utilization through virtualization and new business models through the cloud. At the heart of this industry transformation is open, industry standard solutions running on Intel architecture.
Transforming health care and re-architecting the network are just two examples of Intel harnessing the power of Moore’s Law to transform businesses, industries, and the lives of us all.
Small Business, Big Threat, Security Connected Solutions: How Innovation and the Framework Can Help Protect Small Businesses
On April 22 – right in the middle of “Cyber Week” in the U.S. House of Representatives – Steve Grobman, Intel Fellow and Intel Security’s CTO, will testify before the House Committee on Small Business to discuss how, from Intel’s … Read more >
The transition toward next-generation, high-throughput genome sequencers is creating new opportunities for researchers and clinicians. Population-wide genome studies and profile-based clinical diagnostics are becoming more common and more cost-effective. At the same time, such high-volume and time-sensitive usage models put more pressure on bioinformatics pipelines to deliver meaningful results faster and more efficiently.
Recently, Intel worked closely with Seven Bridges Genomics’ bioinformaticians to design the optimal genomics cluster building block for direct attachment to high-throughput, next-generation sequencers using the Intel Genomics Cluster solution. Though most use cases will involve variant calling against a known genome, more complex analyses can be performed with this system. A single 4-node building block is powerful enough to perform a full transcriptome analysis. As demands grow, additional building blocks can easily be added to a rack to support multiple next-generation sequencers operating simultaneously.
Verifying Performance for Whole Genome Analysis
To help customers quantify the potential benefits of the PCSD Genomics Cluster solution, Intel and Seven Bridges Genomics ran a series of performance tests using the Seven Bridges Genomics software platform. Performance for a whole genome pipeline running on the test cluster was compared with the performance of the same software platform running on a 4-node public cloud cluster based on the previous generation Intel Xeon processor E5 v2 family.
The subset of the pipeline used for the performance tests includes four distinct computational phases:
- Phase A: Alignment, deduplication, and sorting of the raw data reads
- Phase B: Local realignment around indels
- Phase C: Base quality score recalibration
- Phase D: Variant calling and variant quality score recalibration
The results of the performance tests were impressive. The Intel Genomics Cluster solution based on the Intel® Xeon® processor E5-2695 v3 family completed a whole genome pipeline in just 429 minutes versus 726 minutes for the cloud-based solution powered by the prior-generation Intel® Xeon® processor E5 v2 family.
Based on these results, researchers and clinicians can potentially complete a whole genome analysis almost five hours sooner using the newer system. They can also use this 4-node system as a building block for constructing large, local clusters. With this strategy, they can easily scale performance to enable high utilization of multiple high-volume, next-generation sequencers.
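The arithmetic behind the claim is easy to check from the two runtimes quoted above:

```python
# Quick check of the reported results (numbers taken from the text above).
new_minutes = 429   # Intel Xeon processor E5-2695 v3 cluster
old_minutes = 726   # prior-generation E5 v2 cloud cluster

saved_hours = (old_minutes - new_minutes) / 60
speedup = old_minutes / new_minutes
print(f"{saved_hours:.2f} hours saved, {speedup:.2f}x speedup")
# 4.95 hours saved, 1.69x speedup
```

That 4.95-hour saving is the “almost five hours sooner” figure, and the ratio works out to roughly a 1.7x speedup per pipeline run.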
For a more in-depth look at these performance tests, we will soon release an abstract with detailed information about the workloads and system behavior in each phase of the analysis.
What questions do you have?
As Moore’s Law turned 50 this past Sunday, it’s amazing to see how much this ‘law’ is impacting the energy industry. Back in 1975, Gordon Moore extrapolated that computing would dramatically increase in power, and decrease in relative cost, at an … Read more >
The post Digitization of the Utility Industry Part I: The Impact of Moore’s Law appeared first on Grid Insights by Intel.
I have unintentionally raised a large debate recently about whether it is legal in C/C++ to use the &P->m_foo expression when P is a null pointer. The programmers’ community… Read more
The Role of Technology in Retail: Is it enough?
My friend and colleague Jon Stine (@joncstine1) recently penned a blog regarding technology in retail. Jon has extensive retail industry and technology expertise and offers a great perspective on the role of technology in addressing the challenges retailers face. From my perspective, the challenge for retailers with store fronts is best summed up in a question: can you deliver a killer shopping experience, from sofa to store aisle? Sadly, many retailers are not able to answer yes to this question. The vast majority of retailers don’t invest in innovative shopping solutions. Many are content to follow the same old formula of price discounting, coupons and Sunday circulars, a well-proven formula that is just too hard to break from. However, it is a formula we know no longer fits the new connected consumer.
The evolution of the connected consumer has been highlighted at great length in the popular press for at least the last three years. However, many retailers have missed it. You know, the Millennial generation: it will comprise 75% of the workforce by 2020 and command over $2.5T in purchasing power. This segment is always connected and values experiences as much as price. And because they are always connected, they never shop without their devices. Yes, the evolution has been under way for some time, and yes, Millennials are reshaping the shopping experience. What are you going to do about it?
Retailers are facing a strategic inflection point, which could mean an opportunity to prosper or a slow ride toward demise. At least that is my point of view. Jon argues several factors that are relevant to creating an innovative shopping experience.
- “Showrooming” is multidirectional (in-store and online) and it is here to stay.
- Leveraging big data can have a profound impact on your brand and the experience you deliver – it should be considered as the starting point as you create a new shoppers journey.
- Security must become a strategic imperative for the way you conduct business – trust is won in drips and lost in buckets. Cybercriminals are well funded and profitable and hacking will continue as the new normal.
As mentioned in Jon’s blog, retailers have long chosen to focus on maintaining their ongoing operations rather than investing in growth and innovation. Growth and innovation don’t come cheap. As a matter of fact, they are more than a technology roadmap; they are a business strategy. Why do consumers flock to Amazon or any of the “new concept” stores? I argue it boils down to the experience. Amazon provides the ultimate in clienteling and sales assist. The new concept stores I had the privilege to tour in NYC during NRF 2015 offer innovative shopping experiences.
The collection of stores we visited all offered unique and engaging shopping experiences.
Rebecca Minkoff – a connected fashion store with technology envisioned and planned by eBay, bringing the online experience into the physical store through in-store interactive shopping displays. Once shoppers select the clothes they want to try on, they tap a button to have a sales associate bring the items to a dressing room.
Under Armour Brand House – a physical space designed to become a destination for shoppers. The strategy for the stores is to engage the shopper through storytelling. UA founder Kevin Plank is more interested in aligning product communication and retail presentation than anything else. His claim is that UA focuses 80% on storytelling and 20% on product – just the opposite of so many other product retailers.
Converse – yup, that old classic, the Chuck Taylor canvas shoe. Converse has offered online customization for some time. But what if you want an immediate, unique shoe to wear to an event? Now you can visit a Converse store, select your favorite Chucks and set about creating your own personalized style.
Much as Amazon offers a unique shopping experience, these stores invested in delivering innovation. It wasn’t a technology solution alone; it was a desire, from top to bottom, to give the shopper something unique and innovative.
Do you want help delivering growth and innovation in your retail environment? Intel isn’t going to solve all of this on its own. We work with very talented fellow travelers who offer solutions to achieve growth and innovation.
FLOPS means floating point operations per second, a measure used in high performance computing. In general, Intel® VTune™ Amplifier XE only provides a metric named Cycles Per Instruction… Read more
Part 1: 8 Ways to Secure Your Cloud Infrastructure
Cloud security remains a top concern for businesses. Fortunately, today’s data center managers have an arsenal of weapons at their disposal to secure their private cloud infrastructure.
Here are eight things you can use to secure your private cloud.
1. AES-NI Data Encryption
End-to-end encryption can be transformational for the private cloud, securing data at all levels through enterprise-class encryption. The latest Intel processors feature Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), a set of new instructions that enhance performance by speeding up the execution of encryption algorithms.
The instructions are built into Intel® Xeon server processors as well as client platforms including mobile devices.
When encryption software utilises them, the AES-NI instructions dramatically accelerate encryption and decryption – by up to 10 times compared with software-only AES.
This speedy encryption means that it is possible to incorporate encryption across the data centre without significantly impacting infrastructure performance.
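Whether a given server can take advantage of this acceleration depends on the CPU exposing the AES instruction set. As a small, Linux-only sketch (the path and flag name assume Linux's /proc interface), you can check for the "aes" CPU flag:

```python
# Linux-only sketch: check whether the CPU advertises the AES-NI instruction
# set by looking for the "aes" flag in /proc/cpuinfo. On other platforms, or
# if the file is unreadable, this conservatively reports False.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    """Return True if any CPU flags line in cpuinfo lists 'aes'."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "aes" in line.split()
    except OSError:
        pass
    return False

print("AES-NI available:", has_aes_ni())
```

Encryption libraries such as OpenSSL perform an equivalent CPUID check at startup and transparently switch to the AES-NI code path when the flag is present.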
2. Security Protocols
By incorporating a range of security protocols and secure connections, you will build a more secure private cloud.
As well as encrypting data, clouds can also use cryptographic protocols to secure browser access to the customer portal, and to transfer encrypted data.
For example, Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols are used to assure safe communications over networks, including the Internet. Both of these are widely used for applications such as secure web browsing, through HTTPS, as well as email, IM and VoIP.
They are also critical for cloud computing, enabling applications to communicate over the network and throughout the cloud while preventing undetected tampering that modifies content, or eavesdropping on content as it’s transferred.
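To make the idea concrete, here is a minimal sketch using Python's standard-library ssl module. The default context verifies the server's certificate chain and hostname, which is exactly the tamper- and eavesdropping-protection described above:

```python
# Minimal sketch: open a TLS-protected connection with certificate and
# hostname verification using Python's standard-library ssl module.
import socket
import ssl

def fetch_tls_version(host, port=443):
    """Connect over TLS and return the negotiated protocol version string."""
    ctx = ssl.create_default_context()  # verifies certs against system CAs
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"
```

A cloud service written in any language does the same thing through its TLS library: negotiate a cipher suite, verify the peer, then exchange encrypted application data.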
3. OpenSSL, RSAX and Function Stitching
Intel works closely with OpenSSL, a popular open source multiplatform security library. OpenSSL is FIPS 140-2 certified; FIPS 140-2 is a computer security standard administered by the National Institute of Standards and Technology’s Cryptographic Module Validation Program.
It can be used to secure web transactions through services such as Gmail, e-commerce platforms and Facebook, to safeguard connections on Intel architecture.
Two OpenSSL functions to which Intel has contributed are RSAX and function stitching.
The first is a unique implementation of the popular RSA 1024-bit algorithm that delivers significantly better performance than previous OpenSSL implementations. RSAX can speed up SSL session initiation by up to 1.5 times. This provides a better user experience and increases the number of simultaneous sessions your server can handle.
As for function stitching: bulk data buffers use two algorithms for encryption and authentication, but rather than encrypting and authenticating data serially, function stitching interleaves instructions from these two algorithms. By executing them simultaneously, it improves the utilisation of execution resources and boosts performance.
Function stitching can result in up to 4.8 times performance improvement for secure web servers when combined with RSAX and Intel AES-NI.
4. Data Loss Prevention (DLP)
Data protection is rooted in the encryption and secure transfer of data. Data loss prevention (DLP) is a complementary approach focused on detecting and preventing the leakage of sensitive information, either by malicious intent or inadvertent mistake.
DLP solutions can profile content against rules and capture violations or index and analyse data to develop new rules. IT can establish policies that govern how data is used in the organisation and by whom. By doing this they can clarify security practices, identify potential fraud and avert accidental or unauthorised malicious transfer of information.
An example of this technology is McAfee Total Protection for Data Loss Prevention. This software can be used to support an organisation’s governance policies.
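As a sketch of the rule-based profiling described above, a DLP engine can match outgoing content against patterns for sensitive data. The rule names and regular expressions here are illustrative; commercial products use far richer detection (fingerprinting, indexing, machine classification):

```python
# Hypothetical DLP rule sketch: profile text against simple patterns for
# sensitive content and report which rules it violates. Illustrative only.
import re

RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit PANs
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN format
}

def violations(text):
    """Return the names of all rules the text violates."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

print(violations("Card 4111 1111 1111 1111 on file"))  # flags credit_card
```

A policy layer then decides what to do with each violation: block the transfer, quarantine it for review, or log it for audit.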
5. Identity and Access Management

Protecting your platform begins with managing the users who access your cloud. This is a large undertaking because of the array of external and internal applications, and the continual churn of employees.
Ideally, authentication is strengthened by rooting it in hardware. With Intel Identity Protection Technology (Intel IPT), Intel has built tamper-resistant, two-factor authentication directly into PCs based on third-generation Intel Core vPro processors, as well as Ultrabook devices.
Intel IPT offers token generation built into the hardware, eliminating the need for a separate physical token. Third-party software applications work in tandem with the hardware, strengthening the authentication process.
Through Intel IPT technology, businesses can secure their access points by using one-time passwords or public key infrastructure.
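For context, here is what the one-time-password half of that looks like in pure software: a time-based OTP (TOTP, RFC 6238) built from the standard library. Intel IPT generates and protects such credentials in hardware; this sketch only illustrates the underlying algorithm:

```python
# Software sketch of a time-based one-time password (TOTP, RFC 6238).
# Intel IPT keeps the secret and token generation in hardware; this
# pure-software version only shows how the codes are derived.
import hmac
import struct
import time

def totp(secret: bytes, when=None, digits=6, step=30):
    """Derive a numeric one-time password from a shared secret and the clock."""
    counter = int((when if when is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59s, 8 digits
print(totp(b"12345678901234567890", when=59, digits=8))  # prints 94287082
```

The security difference is where the secret lives: in software it can be scraped from memory, while a hardware token built into the platform is far harder to extract or clone.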
6. API-level Controls
Another way in which you can secure your cloud infrastructure is by enforcing API-level controls. The API gateway layer is where security policy enforcement and cloud service orchestration and integration take place. An increased need to expose application services to third parties and mobile applications is driving the need for controlled, compliant application service governance.
With API-level controls, you gain a measure of protection for your departmental and edge system infrastructure, and reduce the risk of content-borne attacks on applications.
Intel Expressway Service Gateway is an example of a scalable software appliance that provides enforcement points and authenticates API requests against existing enterprise identity and access management systems.
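In miniature, a gateway enforcement point is a policy check that runs before any request reaches a backend service. The key values and caller names below are invented for illustration; a real gateway would validate signed tokens against an identity provider rather than a static table:

```python
# Hypothetical sketch of an API-gateway enforcement point: every inbound
# request must carry a valid key before it is forwarded to a backend.
# Keys and caller names are illustrative only.

VALID_KEYS = {"key-abc123": "partner-app", "key-def456": "mobile-app"}

def enforce(request):
    """Return (allowed, reason) for a request dict with an 'api_key' field."""
    caller = VALID_KEYS.get(request.get("api_key"))
    if caller is None:
        return False, "unknown or missing API key"
    return True, f"authenticated as {caller}"

print(enforce({"api_key": "key-abc123", "path": "/orders"}))
print(enforce({"path": "/orders"}))
```

The same choke point is where a gateway also applies rate limits, schema validation, and content inspection, which is what makes API-level governance enforceable in one place.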
7. Trusted Servers and Compute Pools
Because of cloud computing’s reliance on virtualisation, it is essential to establish trust in the cloud. This can be achieved by creating trusted servers and compute pools. Intel Trusted Execution Technology (Intel TXT) builds trust into each server by establishing a root of trust that helps assure system integrity within each system.
The technology checks hypervisor integrity at launch by measuring the code of the hypervisor and comparing it to a known good value. Launch can be blocked if the measurements do not match.
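Conceptually, that launch-time check is a measurement compared against a known-good value. The sketch below mimics the idea in software with a SHA-256 hash; Intel TXT performs the real measurement in hardware and firmware, anchored in the chipset and TPM:

```python
# Conceptual sketch of a measured launch: hash the hypervisor image and
# compare it with a stored known-good value, blocking launch on mismatch.
# (Intel TXT does this in hardware/firmware; this is illustration only.)
import hashlib

# The known-good measurement, recorded when the trusted image was provisioned.
KNOWN_GOOD = hashlib.sha256(b"trusted-hypervisor-image").hexdigest()

def verify_launch(image: bytes) -> bool:
    """Allow launch only if the image's measurement matches the known-good value."""
    return hashlib.sha256(image).hexdigest() == KNOWN_GOOD

print(verify_launch(b"trusted-hypervisor-image"))   # matches: launch allowed
print(verify_launch(b"tampered-hypervisor-image"))  # mismatch: launch blocked
```

Because any change to the image changes its hash, even a single tampered byte in the hypervisor produces a mismatch and a blocked launch.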
8. Secure Architecture Based on TXT
It’s possible to create a secure cloud architecture based on TXT technology, which is embedded in the hardware of Intel Xeon processor-based servers. Intel TXT works with the layers of the security stack to protect infrastructure, establish trust and verify adherence to security standards.
As mentioned, it works with the hypervisor layer, and also the cloud orchestration layer, the security policy management layer and the Security Information and Event Management (SIEM), and Governance, Risk Management and Compliance (GRC) layer.
Cloud security has come a long way. It’s now possible, through the variety of tools and technologies outlined above, to adequately secure both your data and your users. In so doing, you will establish security and trust in the cloud and gain from the agility, efficiency and cost savings that cloud computing brings.