How Intel® System Studio 2016 Helps Performance and Power Efficiency Soar
The number of connected smart devices is taking off—and expected to reach 50 billion by… Read more
When I mention interpreted languages and you wonder if this relates to interpretive dance, the answer is “no”, and my advice is “move along now.” Otherwise, read on.
At a tender young age, say… Read more
Sooner or later, you’ll be in one of your conference rooms with a sales person touting a mobile device they swear will be perfect for your business needs. Whether it is a smartphone, tablet or notebook PC, what you need … Read more >
by Mark J Buxton, Intel® Media Development Products Director
Intel is a founding member of the Alliance for Open Media.
HEVC is the next-gen format used by the media and broadcasting… Read more
Dawn Moore, GM Networking Division
Data center application performance today uses balanced system performance based on a combination of CPU power, faster storage and high-throughput networks; upgrading just one of these elements will not maximize your data center performance.
This wasn’t always the case. In years past, some IT managers could postpone network upgrades because slow storage would limit overall system performance. But now, with much faster solid-state drives (SSDs), the performance bottleneck has shifted from the hard drive to the network.
This means that in today’s IT environment—with hyperscale data centers and virtualized servers—it’s crucial that upgrading to the latest technology, like faster SSDs or 10/40GbE, be viewed from a comprehensive systems viewpoint.
Certainly, upgrading to a server with a new Intel® Xeon® Processor E5-2600 v3 CPU will provide improved performance. Similarly, swapping out a hard drive for an SSD or upgrading from 1GbE to 10GbE will improve performance.
Two recent whitepapers highlight how maximum performance depends on the interconnected nature of these systems. If the entire system isn’t upgraded, then the data center doesn’t get the best return from a new server investment.
The first paper* discusses the improvements in raw performance that can be seen in a complete upgrade. For example, when an older server with SATA SSDs and a single 10GbE NIC was replaced with a new Intel® Xeon® processor E5-2695 v3 based server, a PCIe SSD, and four 10GbE ports, the new system delivered 54% more transactions per minute and 42.4% more throughput, as well as much faster response times in these tests.
What can be done with this raw performance increase? The other whitepaper** answers that question by researching the increase in the number of virtual machines supported by an upgraded system.
With SDN in the data center, data center managers can facilitate the ramp up of new virtual machines (VMs) automatically as user needs grow. In the case illustrated in this paper, it was the ability to automatically spin up a VM and a new instance of Microsoft Exchange to support new email users. With all of this automation, the last thing that’s needed is for the infrastructure to restrict that flexibility.
In this example, a Dell PowerEdge R720 server replaced an older Dell PowerEdge R710 server-storage solution. These new systems featured the latest Intel® Xeon® processor, new operating system, SSD storage and Intel® Ethernet CNA X520 (10GbE) adapters. When the tests were finished, the new system supported 4.5 times more VMs than the previous system.
What is interesting to me is that the researchers measured the performance increase for each part of the upgrade—which really illustrates the point that these upgrades need to be done comprehensively.
In this test, when the researchers upgraded just the CPU and the OS, they saw a 275 percent performance increase. Not bad. But adding higher-performance SSDs to the new CPU and OS resulted in a 325 percent improvement. And finally, when they added the new network adapters, overall VM density improvement climbed to 450 percent compared to the original base system.
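Reading the reported figures as multiples of the original baseline (an assumption; the whitepaper's exact accounting may differ), the staged gains can be tallied in a few lines to show how much each upgrade stage contributed on its own:

```python
# Staged VM-density gains relative to the original server baseline (1.0x).
# The multiples are the figures quoted above; the incremental math is a
# sketch showing what each stage added over the previous configuration.
baseline = 1.0
stages = {
    "CPU + OS upgrade": 2.75,   # 275 percent of baseline
    "+ SSD storage": 3.25,      # 325 percent of baseline
    "+ 10GbE adapters": 4.50,   # 450 percent of baseline
}

prev = baseline
for stage, multiple in stages.items():
    # Incremental gain of this stage over the previous configuration
    step_gain = (multiple / prev - 1) * 100
    print(f"{stage}: {multiple:.2f}x baseline (+{step_gain:.0f}% over previous step)")
    prev = multiple
```

Broken out this way, the network adapter stage alone adds roughly 38 percent on top of the CPU/OS/SSD configuration, which is the whole point: leaving the network un-upgraded forfeits a large slice of the investment.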
More details on both of these examples are available in the white papers referenced below.
When it’s time to invest in new servers, take a look at the rest of your system, which includes your Ethernet and storage sub-system, and think about the combination that will give you the best return on your investment.
As a neighbor and frequent visitor to California for work, family and play, the California drought and the millions of residents who are adjusting their water habits are frequently on my mind. At the end of May, I cycled from … Read more >
Governor Brown of California signed an executive order (Order B-34-15) establishing a California Cybersecurity Integration Center (Cal-CSIC) to align and improve the posture and resilience of the state’s cybersecurity strategy. The Cal-CSIC will coordinate across state agencies and include federal government partners. It will create a Cyber Incident Response Team and secure mechanisms to properly share appropriate information.
California is a massive state with a huge economy, and it is heavily dependent on technology. Having a centralized capability to align and integrate resources is a fantastic concept. I applaud all the work that had to occur to get this executive order to this point. But the question remains: will the Cal-CSIC be a bureaucratic paper tiger, or will it have the necessary leadership, skills, and resources to forge a meaningful role in aligning a large and diverse team to prioritize and manage the state’s cyber risks?
As a Californian and a cybersecurity professional, I truly hope this organization can become the beacon which forges effective alliances of the security teams across the state. Currently, separate organizations are working independently, without the benefit of strong coordination, to manage their cyber risks. The challenges are immense and put the state at considerable risk. Citizens of the state have high expectations. California, a longtime bastion of technology innovation, has been a leader in securing citizens’ privacy, life-safety practices, and environmental protection. Cybersecurity overlays and binds all these aspects and can contribute to the health, prosperity, and safety of every Californian.
This team will need very strong leadership to get all these groups to work together effectively. Otherwise it will become a detriment, adding unnecessary bureaucracy, without tangible benefits, to those groups trying to do the job independently. If California is able to get this right, it will be a huge win. If it gets it wrong, it will add to the problems and hobble all the current efforts underway.
Governor Brown, move carefully, but with purpose. I urge you to forgo political appointments or service-based promotions, and instead get the right functional experts in place to make this a reality and protect California. Find leaders with the expert cybersecurity strategic insight, superb communication abilities, and practical industry experience necessary to earn the respect of the cross-functional team and of private sector partners. This will be a very tough job with ambitious goals, but if done properly it has the potential to set California apart and showcase the state’s innovation and effectiveness in cybersecurity operations and internal governance as a standard for the nation and the world.
Intel Network: All My Previous Blog Posts
Our guest blogger for this post is Mike Reed, an investment director in Intel’s New Business Initiatives incubator, where he spends his time developing a pipeline of new business concepts, coaching and mentoring the teams that envision and validate those opportunities, … Read more >
Here’s an interesting disconnect: 84 percent of C-suite executives believe that the Internet of Things (IoT) will create new sources of revenue. However, only 7 percent have committed to an IoT investment.1 Why the gap between belief and action? P… Read more
Recently, a group of students from the Computer Science department at California Polytechnic State University (Cal Poly), were assigned to design and write programs to solve different High… Read more
By Barry Davis, General Manager, High Performance Fabrics Operation at Intel
Intel Omni-Path Architecture (Intel OPA) is gearing up to be released in Q4’15 which is just around the corner! As we get closer to our official release, things are getting real and we’re providing more insight into the fabric for our customers and partners. In fact, more Intel Omni-Path architectural level details were just presented on August 26th at Hot Interconnects. Before I talk about the presentation, I want to remind you that this summer at ISC ’15 in Germany, we disclosed the next level of detail and showcased the first Intel OPA public demo through the COSMOS supercomputer simulation.
For those who didn’t make it to Frankfurt, we talked about our evolutionary approach to building the next-generation fabric. We shared how we built upon key elements of Aries* interconnect and Intel® True Scale fabric technology while adding revolutionary features such as:
These features are a significant advancement because together they help deliver enhanced performance and scalability through higher MPI rates, lower latency, and higher bandwidth. They also provide improved Quality of Service (QoS), resiliency, and reliability. In total, these features are designed to support the next generation of data centers with unparalleled price/performance and capability.
At Hot Interconnects we provided even more detail. Our chief OPA software architect, Todd Rimmer, gave an in-depth presentation on the architectural details of our forthcoming fabric. He delivered more insight into what makes Intel OPA a significant advancement in high performance fabric technology. He covered the major wire-level protocol changes responsible for the features listed above—specifically the layer between Layer 1 and Layer 2, coined “Layer 1.5.” This layer provides the Quality of Service (QoS) and fabric reliability features that will help deliver the performance, resiliency, and scale required for next-generation HPC deployments. Todd closed, true to his software roots, with a discussion of how Intel is upping the ante on the software side: Intel OPA software improvements, including the next-generation MPI-optimized fabric communication library, Performance Scaled Messaging 2 (PSM2), and powerful new features for fabric management.
Check out the paper Todd presented for a deep dive into the details!
Stay tuned for more updates as the Intel® Omni-Path Architecture continues the run-up towards release in the 4th quarter of this year.
Take it easy
The Intel® Data Analytics Acceleration Library (Intel® DAAL), the high performance analytics (for “Big Data”) library for x86 and x86-64, is available for free for everyone (click here now to… Read more
In previous posts (here, here and here), I’ve written about the content of and process of creating the next revision of the Fortran standard, Fortran 2015. At the August 2015 joint WG5/J3 meeting in… Read more
At Intel, we get a lot of requests for feedback on resumes. While we can’t respond to each and every individual request, I wanted to share some of my best tips with everyone—feel free to pass them on! 1. Have … Read more >
The Multi-OS engine rocks again with the latest update 2. The release is packed with some great features like:
Building your native iOS* UI from XCode Interface Builder Storyboard and code the… Read more
I wrote this post on my Intel Software Blog back on September 20, 2006. Unfortunately, the post was apparently too old, and I must have missed the notice that it was going to be unpublished…. Read more
SaaS is not new. It has been used for both business and personal use for some time and for a few years in its cloud form. So what sort of security changes are required to use SaaS in the enterprise? What SaaS challenges is Intel IT encountering? Why now? In this blog I share some real-life SaaS experiences, as a cloud and mobile security engineer at Intel, as well as my view of SaaS security.
The emergence of new and large use cases triggered Intel IT to reignite our SaaS architecture and look for more security solutions for the SaaS cloud space. Previously at Intel, use cases for cloud-based SaaS were small and limited to a few users. But the new use cases involved thousands of users, mainstream apps such as data repositories and collaboration, and big business models such as CRM. These large use cases required us to reexamine our SaaS strategy, architecture, and controls to protect those mass deployments. As documented in our recent white paper, these controls center mainly on data protection, authentication and access control, and logs and alerts. We strive to enforce these controls without negatively impacting the user experience and the time to market of SaaS solutions. The paper also discusses how we manage shadow IT—users accessing SaaS services without IT awareness.
While the white paper summarizes our SaaS security controls, I’d like to delve a bit deeper into cloud inspection.
As is often the case, the right approach wasn’t immediately apparent. We needed to examine the advantages and disadvantages of the various choices – sometimes a complicated process. We investigated two ways we could inspect activity and data:
We reached the conclusion that each solution needs to match the specific use case security requirements. Some SaaS implementations’ security requires more control, some less. Therefore we believe it is important to have a toolset where you can mix and match the needed security controls. And yes, you need to test it!
I’d like to hear from other IT professionals. How are you handling SaaS deployments? What controls have you implemented? What best-known methods have you developed, and what are some remaining pain points? I’d be happy to answer your questions and pass along our own SaaS security best practices. Please share your thoughts and insights with me – and your other IT colleagues on the IT Peer Network – by leaving a comment below. Join the conversation!
Back in 1993, when the first 7200-RPM hard drives hit the market, I imagine people thought they could never fill up their jaw-dropping 2.1GB capacity. Of course, that was before the era of MP3s and digital photos, and any videos you had were on VHS or Beta (or possibly laserdisc).
Today, desktop PCs like the ASUS K20CE* mini PC come with up to 3TB of SSD storage to accommodate users’ massive collections of HD videos, photos, eBooks, recorded TV, and other huge files. That’s terabytes! Some feature even more storage.
But how do you access these files away from home? You could use one of the many cloud services on the market; however, if you have lots of personal photos and videos, or large documents and files from work, you’ll quickly reach the cap on the free capacity and have to start paying monthly subscription fees. Plus, you’d need to remember to upload any files you might want to access later, and if you want to change services, you’d have to move your files from one network to another, which can be a hassle, not to mention a security concern.
A better option would be to take advantage of Intel ReadyMode Technology (Intel RMT) and third-party remote access software such as Splashtop, Teamviewer, or Microsoft Remote Desktop to turn your desktop PC into an always-available “personal cloud” that lets you access all of your files on your other devices, such as your smartphone or tablet.
“With RMT, your data is stored safely in your home computer so you don’t have to worry about people hacking into it. You can access it through remote log on or through VPN,” said Fred Huang, Product Manager, ASUS Desktop Division. “It’s a better way to access your personal files that exists today with ASUS systems running Intel RMT.”
Intel RMT replaces the traditional PC sleep state with a quiet, low power, OS-active state that allows PCs to remain connected, up-to-date, and instantly available when not in use. Plus, it allows background applications—like remote access software—to run with the display off while consuming a fraction of the electricity it normally would when fully powered on.
“Cloud-based storage is usually more personal, so you might have a different account from your spouse or family member, but with a home hub PC, it can be one shared account that the whole family can access,” adds Huang.
For businesses, Intel RMT allows employees to use remote access to get to their work files from anywhere without the need for their desktops to remain fully awake and consuming power. Across a large enterprise, that kind of power savings really adds up.
Another business benefit: desktops with Intel RMT enable automatic backups and nightly system health checks to happen efficiently during off hours without waking the machines—saving power while protecting files and uptime.
ASUS desktop PCs allow users to do everything from daily tasks to playing 4K ultra HD video with enhanced energy efficiency, better productivity, and powerful performance across all their form factors. Other highlights include instant logins, voice activation, and instant sync and notifications.
And don’t forget about the gamers. RMT can help support game downloads and streaming sessions without wasting a lot of energy. Gamers can also choose to run updates and applications in the background 24/7, or overnight, and save time and energy by being connected to an energy-efficient smart home hub. Take a look at this recap video of the always available PC from IDF 2015 last month.
In addition to the ASUS K20 mentioned above, Intel RMT will also be featured in upcoming models of the ASUS M32AD* tower PC, the ASUS Zen AiO Z240IC* All-in-One, and the ASUS E510* mini PC.
Want to find out more about what Intel Ready Mode can do? Visit: www.intel.com/readymode.
The practice of using maliciously signed binaries continues to grow. Digitally signing malware with legitimate credentials is an easy way to make victims believe what they are downloading, seeing, and installing is safe. That is exactly what the malware writers want you to believe. But it is not true.
Through the use of stolen or counterfeit signing credentials, attackers can make their code appear trustworthy. This tactic works very well and is becoming ever more popular as a mechanism to bypass typical security controls.
The latest numbers from the Intel Security Group’s August 2015 McAfee Labs Threat Report reveal a steady climb in the total number of maliciously signed binaries spotted in use on the Internet. They show a disturbingly healthy growth rate, with total numbers approaching 20 million unique samples detected.
Although it takes extra effort to sign malware, it is worth it for the attackers. No longer an exclusive tactic of state-sponsored offensive cyber campaigns, it is now being used by cyber-criminals and professional malware writers, and is becoming a widespread problem. Signing allows malware to slip past network filters and security controls, and can be used in phishing campaigns. This is a highly effective trust-based attack, leveraging the very security structures initially developed to reinforce confidence when accessing online content. Signing code began as a way to thwart hackers from secretly injecting Trojans into applications and other malware masquerading as legitimate software. The same practice is in place for verifying content and authors of messages, such as emails. Hackers have found a way to twist this technology around for their benefit.
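To make the trust mechanism (and how it gets abused) concrete, here is a minimal Python sketch of signature verification. It is a deliberate simplification: real code signing uses asymmetric keys and X.509 certificates, whereas this stdlib-only version substitutes a shared-key HMAC, and every name in it is illustrative.

```python
import hashlib
import hmac

def sign(binary: bytes, signing_key: bytes) -> bytes:
    """The 'publisher' signs a digest of the binary with its signing key."""
    digest = hashlib.sha256(binary).digest()
    return hmac.new(signing_key, digest, hashlib.sha256).digest()

def verify(binary: bytes, signature: bytes, signing_key: bytes) -> bool:
    """The 'OS or security control' checks the signature before trusting the binary."""
    digest = hashlib.sha256(binary).digest()
    expected = hmac.new(signing_key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

vendor_key = b"legitimate-publisher-key"
app = b"benign application bytes"
sig = sign(app, vendor_key)

assert verify(app, sig, vendor_key)                     # legitimate code passes
assert not verify(b"tampered bytes", sig, vendor_key)   # tampering is caught

# The attack described above: with a *stolen* vendor key, malware signs
# cleanly, and signature verification alone cannot tell it apart from
# legitimate code -- the check validates the key, not the intent.
malware = b"malicious payload"
stolen_sig = sign(malware, vendor_key)
assert verify(malware, stolen_sig, vendor_key)          # passes; trust is abused
```

The last assertion is the whole problem in miniature: verification proves only that the holder of the signing key produced the binary, which is exactly why stolen and counterfeit credentials are so valuable to attackers.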
The industry has known of the emerging problem for some time. New tools and practices are being developed and employed. Detective and corrective controls are being integrated into host, data center, and network based defenses. But adoption is slow which affords a huge opportunity for attackers.
The demand for stolen certificates is rising, driven partly by increasing usage and partly by the erosion effect of better security tools and practices, which shrink the window of time any misused signature remains valuable. Malware writers want a steady stream of fresh, highly trusted credentials to exploit. Hackers who breach networks are harvesting these valuable assets, and we are now seeing new malware with built-in features to steal the credentials of its victims. A new variant of the notorious Zeus malware family, “Sphinx,” is designed to allow cybercriminals to steal digital certificates. The attacker community is quickly adapting to fulfill market needs.
Maliciously signed malware is a significant and largely underestimated problem which undermines the structures of trust which computer and transaction systems rely upon. Signed binaries are much more dangerous than the garden variety of malware. Until effective and pervasive security measures are in place, this problem will grow in size and severity.
I feel very fortunate to be a part of the hugely exciting culture of innovation that is making its mark in Israel at the moment. The country has a reputation as fertile ground for start-up companies to flourish, but it’s also seeing a rapid pace of technological innovation. I recently returned to Israel after living abroad for a number of years, and the sheer scale of new development is amazing—even more so when you consider our relatively small population. Office blocks and research labs are shooting up, more and more high-end, high-value products are being manufactured, and investments and M&A activity are huge. I spoke to Guy Bar-Ner, regional sales director for Intel Israel, about what this means for Intel.
To put this growth into perspective: there are currently 74 Israeli companies listed on Nasdaq, one of the largest representations for a non-US country. The national economy is strong and the high-tech industry is doing well. It’s a great time to be in business here.
Guy said: “Being part of the Intel Sales and Marketing team based in Israel means I have lots of opportunities to get involved with some of the most exciting developments and play a role in helping drive the industry forward.
With a large (10,000-strong) presence, Intel Israel is in a strong position to help make a difference. We consolidated this position recently when we opened our IoT Ignition Lab in Tel Aviv. Our vision for the Lab is to provide local companies with the resources, space and tools they need to get their Internet of Things (IoT) ideas off the ground. This is the first time we’ve been able to offer such dedicated support to companies both large and small in the country, and after just two months of operation, it’s already showing promising results.
We offer companies that are innovating in the IoT space the opportunity to work with Intel’s technical experts to identify opportunities to develop their solutions on Intel® architecture, and then provide them with the resources to build or enhance their solutions, and a platform on which to showcase them to prospective customers through the Lab’s demo center.
The Lab focuses on four key pillars – Smart Cities, Smart Transportation, Smart Agriculture and Smart Home – but provides support and resources for any kind of IoT project that qualifies. At the moment, we’re working on a couple of exciting projects, including a Smart Cities solution from IPgallery, a Smart Transportation/Supply Chain solution from CartaSense and a personalized music solution from Sevenpop.
In addition to our work with local IoT companies, we’re using the IoT Ignition Labs to support Israel’s strong (and growing) maker/developer community. We have about 500 of these visionary folks just among the Intel Israel employees. They take part in many maker/developer hackathons and meet-up events during the year. The size of the overall Israel maker/developer community is amazing, holding up to ten meet-ups on various technology-related topics per week in the greater Tel Aviv area alone. The ideas that this community comes up with are fantastic – in fact it was a team from Israel that won first place in the Intel® Edison Make It Pro Challenge last year.
We’re keen to support these innovators by offering access to Intel resources and products to help them build the must-have solutions of tomorrow. We’ve been running hackathons to give them a forum in which to work together and come up with new ideas, and the winners of the hackathons are then welcomed into the Ignition Lab to work alongside the Intel experts to develop their idea into a marketable solution. In addition, the Intel Ingenuity Partner Program (IIPP) is a new program that is now up and running, working with a select few start-ups to help them build and market their Intel architecture-based solutions. The combination of the IIPP and the Intel IoT Ignition Lab is a fantastic way for start-ups to develop new and exciting solutions.
Engaging with the IoT Community
Meanwhile, we’re also taking the opportunity to drive further collaboration with the local community of start-ups and innovators at the upcoming DLD Innovation Festival, which is taking place in Tel Aviv in early September. For the first time, Intel will be taking part directly in this event, and we’ll be hosting a number of events and activities at the Intel Innovation building near the main entrance on September 8th and 9th – including
I invite everyone to come to the DLD event to experience Intel’s technology in action and engage with the people at Intel who are creating the future.”
To continue the conversation on Twitter, please follow us at @IntelIoT