Recently, a group of students from the Computer Science department at California Polytechnic State University (Cal Poly) were assigned to design and write programs to solve different High… Read more
Recent Blog Posts
By Barry Davis, General Manager, High Performance Fabrics Operation at Intel
Intel Omni-Path Architecture (Intel OPA) is gearing up to be released in Q4’15, which is just around the corner! As we get closer to our official release, things are getting real and we’re providing more insight into the fabric for our customers and partners. In fact, more Intel Omni-Path architectural-level details were just presented on August 26th at Hot Interconnects. Before I talk about the presentation, I want to remind you that this summer at ISC ’15 in Germany, we disclosed the next level of detail and showcased the first Intel OPA public demo through the COSMOS supercomputer simulation.
For those who didn’t make it to Frankfurt, we talked about our evolutionary approach to building the next-generation fabric. We shared how we built upon key elements of Aries* interconnect and Intel® True Scale fabric technology while adding revolutionary features such as:
- Traffic Flow Optimization: provides very fine-grained control of traffic flows and patterns by making priority decisions so that important data, like latency-sensitive MPI data, has an express path through the fabric and doesn’t get blocked by low-priority traffic. The result is improved performance for high-priority jobs and better run-to-run consistency.
- Packet Integrity Protection: catches and corrects all single- and multi-bit errors in the fabric without adding the latency imposed by other error detection and correction technologies. Error detection and correction is extremely important in fabrics running at the speed and scale of Intel OPA.
- Dynamic Lane Scaling: guarantees that a workload will gracefully continue to completion even if one or more lanes of a 4x link fail, rather than shutting down the entire link, as was the case with other high performance fabrics.
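To give a rough feel for the idea behind Traffic Flow Optimization, here is a toy priority scheduler. This is purely illustrative and is not Intel OPA's actual implementation; the class and priority values are made up for the sketch. High-priority packets are dequeued first, so latency-sensitive MPI traffic is never stuck behind bulk transfers that are already waiting.

```python
import heapq

# Illustrative only: a toy priority scheduler, NOT Intel OPA's actual
# Traffic Flow Optimization logic. Lower priority number = more urgent.
class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority level

    def enqueue(self, priority, packet):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        # Always returns the highest-priority (lowest number) packet,
        # regardless of arrival order.
        return heapq.heappop(self._queue)[2]

sched = PriorityScheduler()
sched.enqueue(2, "bulk-storage-block")      # arrives first
sched.enqueue(0, "mpi-latency-sensitive")   # arrives later, but urgent
sched.enqueue(1, "management")
print(sched.dequeue())  # -> mpi-latency-sensitive
```

The real fabric makes these decisions in hardware at wire speed; the sketch only shows why priority-aware dequeuing keeps high-priority jobs from being blocked.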
These features are a significant advancement because together they help deliver enhanced performance and scalability through higher MPI rates, lower latency and higher bandwidth. They also provide for improved Quality of Service (QoS), resiliency and reliability. In total, these features are designed to support the next generation of data centers with unparalleled price/performance and capability.
At Hot Interconnects we provided even more detail. Our Chief OPA software architect, Todd Rimmer, gave an in-depth presentation on the architectural details of our forthcoming fabric. He delivered more insight into what makes Intel OPA a significant advancement in high performance fabric technology. He covered the major wire-level protocol changes responsible for the features listed above – specifically the layer between Layer 1 and Layer 2, coined “Layer 1.5.” This layer provides the Quality of Service (QoS) and fabric reliability features that will help deliver the performance, resiliency, and scale required for our next-generation HPC deployments. Todd closed by keeping to his software roots, discussing how Intel is upping the ante on the software side with Intel OPA software improvements, including the next-generation MPI-optimized fabric communication library – Performance Scaled Messaging 2 (PSM2) – and powerful new features for fabric management.
Check out the paper Todd presented for a deep dive into the details!
Stay tuned for more updates as the Intel® Omni-Path Architecture continues the run-up towards release in the 4th quarter of this year.
Take it easy
No Cost Options for Intel Data Analytics Acceleration Library (DAAL), Support yourself, Royalty-Free
The Intel® Data Analytics Acceleration Library (Intel® DAAL), the high performance analytics (for “Big Data”) library for x86 and x86-64, is available for free for everyone (click here now to… Read more
In previous posts (here, here and here), I’ve written about the content of and process of creating the next revision of the Fortran standard, Fortran 2015. At the August 2015 joint WG5/J3 meeting in… Read more
At Intel, we get a lot of requests for feedback on resumes. While we can’t respond to each and every individual request, I wanted to share some of my best tips with everyone—feel free to pass them on! 1. Have … Read more >
The post Intel Student Center: Resume Tips for Students—and Other Job Seekers appeared first on Jobs@Intel Blog.
The Multi-OS engine rocks again with the latest update 2. The release is packed with some great features like:
Building your native iOS* UI from XCode Interface Builder Storyboard and code the… Read more
I wrote this post on my Intel Software Blog back on September 20, 2006. Unfortunately, the post was apparently too old, and I must have missed the notice that it was going to be unpublished…. Read more
SaaS is not new. It has been used for both business and personal use for some time and for a few years in its cloud form. So what sort of security changes are required to use SaaS in the enterprise? What SaaS challenges is Intel IT encountering? Why now? In this blog I share some real-life SaaS experiences, as a cloud and mobile security engineer at Intel, as well as my view of SaaS security.
Matching Strategy to the Current Environment
The emergence of new and large use cases triggered Intel IT to reignite our SaaS architecture and look for more security solutions for the SaaS cloud space. Previously at Intel, use cases for cloud-based SaaS were small and limited to a few users. But the new use cases involved thousands of users, mainstream apps such as data repositories and collaboration, and big business models such as CRM. These large use cases required us to reexamine our SaaS strategy, architecture, and controls to protect those mass deployments. As documented in our recent white paper, these controls center mainly on data protection, authentication and access control, and logs and alerts. We strive to enforce these controls without negatively impacting the user experience and the time to market of SaaS solutions. The paper also discusses how we manage shadow IT—users accessing SaaS services without IT awareness.
How We Handle Cloud Traffic Inspection
While the white paper summarizes our SaaS security controls, I’d like to delve a bit deeper into cloud inspection.
As is often the case, the right approach wasn’t immediately apparent. We needed to examine the advantages and disadvantages of the various choices – sometimes a complicated process. We investigated two ways we could inspect activity and data:
- Cloud proxy. In this approach, we would pass all the traffic through a cloud proxy, which inspects the traffic and can also encrypt specific fields, a valuable process in controlling the traffic and information being passed to the cloud provider. The downside of this solution is that the traffic is directed through the cloud proxy, which might cause performance issues in massive cloud implementations, where the cloud provider has many points of presence around the globe. Cloud proxies can also impact the application modules in cases where a reverse proxy is used.
- Cloud provider APIs. This option uses the cloud provider’s APIs, an approach that allows inspection of user activity, data, and various attributes. The benefit of such an implementation is that it happens behind the scenes and doesn’t impact the user experience (because it is a “system-to-system” connection). But the downside of using APIs is that not all cloud providers offer the same set of APIs. Also, the use cases between SaaS providers can differ—requiring more time to fine-tune each implementation.
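To make the API option more concrete, here is a minimal sketch of the kind of policy check an API-based inspection service performs. The event shape, field names, and the policy below are assumptions made up for illustration; in practice the events would be pulled system-to-system from the cloud provider's audit-log APIs, which (as noted above) differ between providers.

```python
# Hypothetical audit-event inspection. In a real deployment, `events`
# would come from the SaaS provider's audit-log API; the field names and
# policy here are illustrative assumptions only.
BLOCKED_ACTIONS = {"external_share", "bulk_download"}

def flag_violations(events, allowed_domains=frozenset({"example.com"})):
    """Return the events that violate a simple data-protection policy."""
    flagged = []
    for ev in events:
        domain = ev["user"].split("@")[-1]
        if ev["action"] in BLOCKED_ACTIONS or domain not in allowed_domains:
            flagged.append(ev)
    return flagged

events = [
    {"user": "alice@example.com", "action": "view"},
    {"user": "bob@example.com", "action": "bulk_download"},
    {"user": "eve@intruder.net", "action": "view"},
]
print([e["user"] for e in flag_violations(events)])
# -> ['bob@example.com', 'eve@intruder.net']
```

Because this runs behind the scenes against the provider's logs, it illustrates the "system-to-system" benefit: users never see the inspection happening.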
We reached the conclusion that each solution needs to match the specific use case security requirements. Some SaaS implementations’ security requires more control, some less. Therefore we believe it is important to have a toolset where you can mix and match the needed security controls. And yes, you need to test it!
I’d like to hear from other IT professionals. How are you handling SaaS deployments? What controls have you implemented? What best-known methods have you developed, and what are some remaining pain points? I’d be happy to answer your questions and pass along our own SaaS security best practices. Please share your thoughts and insights with me – and your other IT colleagues on the IT Peer Network – by leaving a comment below. Join the conversation!
Back in 1993, when the first 7200-RPM hard drives hit the market, I imagine people thought they could never fill up its jaw-dropping 2.1GB capacity. Of course, that was before the era of MP3s and digital photos, and any videos you had were on VHS or Beta (or possibly laserdisc).
Today, desktop PCs like the ASUS K20CE* mini PC come with up to a 3TB SSD to accommodate users’ massive collections of HD videos, photos, eBooks, recorded TV, and other huge files. That’s terabytes! Some models feature even more storage.
But how do you access these files away from home? You could use one of the many cloud services on the market; however, if you have lots of personal photos and videos, or large documents and files from work, you’ll quickly reach the cap on the free capacity and have to start paying monthly subscription fees. Plus, you’d need to remember to upload files that you might want to access later to the cloud, and if you want to change services, move your files digitally from one network to another, which can be a hassle, not to mention a security concern.
Access your data anytime, anywhere
A better option would be to take advantage of Intel ReadyMode Technology (Intel RMT) and third-party remote access software such as Splashtop, Teamviewer, or Microsoft Remote Desktop to turn your desktop PC into an always-available “personal cloud” that lets you access all of your files on your other devices, such as your smartphone or tablet.
“With RMT, your data is stored safely in your home computer so you don’t have to worry about people hacking into it. You can access it through remote log on or through VPN,” said Fred Huang, Product Manager, ASUS Desktop Division. “It’s a better way to access your personal files that exists today with ASUS systems running Intel RMT.”
Intel RMT replaces the traditional PC sleep state with a quiet, low power, OS-active state that allows PCs to remain connected, up-to-date, and instantly available when not in use. Plus, it allows background applications—like remote access software—to run with the display off while consuming a fraction of the electricity it normally would when fully powered on.
“Cloud-based storage is usually more personal, so you might have a different account from your spouse or family member, but with a home hub PC, it can be one shared account that the whole family can access,” adds Huang.
For businesses, Intel RMT allows employees to use remote access to get to their work files from anywhere without the need for their desktops to remain fully awake and consuming power. Across a large enterprise, that kind of power savings really adds up.
Another business benefit: desktops with Intel RMT enable automatic backups and nightly system health checks to happen efficiently during off hours without waking the machines—saving power while protecting files and uptime.
The perfect home (and work) desktop
ASUS desktop PCs allow users to do everything from daily tasks to playing 4K Ultra HD video with enhanced energy efficiency, better productivity, and powerful performance across all their form factors. Other highlights include instant logins, voice activation, and instant sync and notifications.
And don’t forget about the gamers. RMT can help support game downloads and streaming sessions without wasting a lot of energy. Gamers can also choose to run updates and applications in the background 24/7, or overnight, and save time and energy by being connected to an energy-efficient smart home hub. Take a look at this recap video of the always available PC from IDF 2015 last month.
In addition to the ASUS K20 mentioned above, Intel RMT will also be featured in upcoming models of the ASUS M32AD* tower PC, the ASUS Zen AiO Z240IC* All-in-One, and the ASUS E510* mini PC.
Want to find out more about what Intel Ready Mode can do? Visit: www.intel.com/readymode.
The practice of using maliciously signed binaries continues to grow. Digitally signing malware with legitimate credentials is an easy way to make victims believe what they are downloading, seeing, and installing is safe. That is exactly what the malware writers want you to believe. But it is not true.
Through the use of stolen or counterfeit signing credentials, attackers can make their code appear trustworthy. This tactic works very well and is becoming ever more popular as a mechanism to bypass typical security controls.
The latest numbers from the Intel Security Group’s August 2015 McAfee Labs Threat Report reveal a steady climb in the total number of maliciously signed binaries spotted in use on the Internet. The report shows a disturbingly healthy growth rate, with the total approaching 20 million unique samples detected.
Although it takes extra effort to sign malware, it is worth it for the attackers. No longer an exclusive tactic of state-sponsored offensive cyber campaigns, it is now being used by cyber-criminals and professional malware writers, and is becoming a widespread problem. Signing allows malware to slip past network filters and security controls, and can be used in phishing campaigns. This is a highly effective trust-based attack, leveraging the very security structures initially developed to reinforce confidence when accessing online content. Signing code began as a way to thwart hackers from secretly injecting Trojans into applications and other malware masquerading as legitimate software. The same practice is in place for verifying content and authors of messages, such as emails. Hackers have found a way to twist this technology around for their benefit.
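The crux of the problem is that a valid signature only proves the binary was signed by the holder of a key, not that the key holder is trustworthy. That is why defenses increasingly also check the signing certificate against lists of known-stolen or revoked certificates. The sketch below illustrates that second check; the fingerprints and inputs are fabricated for illustration, and a real control would consume feeds of revoked and abused certificates rather than a hard-coded set.

```python
import hashlib

# Illustrative blocklist check. The "certificates" and fingerprints here
# are made-up byte strings; real controls compare the SHA-256 fingerprint
# of the actual signing certificate against threat-intelligence feeds.
STOLEN_CERT_FINGERPRINTS = {
    hashlib.sha256(b"stolen-vendor-cert").hexdigest(),
}

def is_cert_blocked(cert_der: bytes) -> bool:
    """Return True if the signing cert's fingerprint is on the blocklist."""
    return hashlib.sha256(cert_der).hexdigest() in STOLEN_CERT_FINGERPRINTS

print(is_cert_blocked(b"stolen-vendor-cert"))  # True  - signed, but untrusted
print(is_cert_blocked(b"legitimate-cert"))     # False
```

The point of the sketch: "signed" and "safe" are separate questions, and answering the second one requires data beyond the signature itself.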
The industry has known of the emerging problem for some time. New tools and practices are being developed and employed. Detective and corrective controls are being integrated into host, data center, and network based defenses. But adoption is slow which affords a huge opportunity for attackers.
The demand for stolen certificates is rising, driven by increasing usage and partly by an erosion effect of better security tools and practices, which shrink the window of time any misused signature remains valuable. Malware writers want a steady stream of fresh, highly trusted credentials to exploit. Hackers who breach networks are harvesting these valuable assets, and we are now seeing new malware with features to steal the credentials of its victims. A new variant of the hugely notorious Zeus malware family, “Sphinx,” is designed to allow cybercriminals to steal digital certificates. The attacker community is quickly adapting to fulfill market needs.
Maliciously signed malware is a significant and largely underestimated problem which undermines the structures of trust which computer and transaction systems rely upon. Signed binaries are much more dangerous than the garden variety of malware. Until effective and pervasive security measures are in place, this problem will grow in size and severity.
I feel very fortunate to be a part of the hugely exciting culture of innovation that is making its mark in Israel at the moment. The country has a reputation as fertile ground for start-up companies to flourish, but it’s also seeing a rapid pace of technological innovation. I recently returned to Israel after living abroad for a number of years, and the sheer scale of new development is amazing – even more so when you consider our relatively small population. Office blocks and research labs are shooting up, more and more high-end, high-value products are being manufactured, and investments and M&A activity are huge. I spoke to Guy Bar-Ner, regional sales director for Intel Israel, about this growth.
To put this growth into perspective: there are currently 74 Israeli companies listed on Nasdaq, one of the largest representations for a non US country. The national economy is strong and the high-tech industry is doing well. It’s a great time to be in business here.
Guy said: “Being part of the Intel Sales and Marketing team based in Israel means I have lots of opportunities to get involved with some of the most exciting developments and play a role in helping drive the industry forward.
With a large (10,000-strong) presence, Intel Israel is in a strong position to help make a difference. We consolidated this position recently when we opened our IoT Ignition Lab in Tel Aviv. Our vision for the Lab is to provide local companies with the resources, space and tools they need to get their Internet of Things (IoT) ideas off the ground. This is the first time we’ve been able to offer such dedicated support to companies both large and small in the country, and after just two months of operation, it’s already showing promising results.
We offer companies that are innovating in the IoT space the opportunity to work with Intel’s technical experts to identify opportunities to develop their solutions on Intel® architecture, and then provide them with the resources to build or enhance their solutions, and a platform on which to showcase them to prospective customers through the Lab’s demo center.
The Lab focuses on four key pillars – Smart Cities, Smart Transportation, Smart Agriculture and Smart Home – but provides support and resources for any kind of IoT project that qualifies. At the moment, we’re working on a couple of exciting projects, including a Smart Cities solution from IPgallery, a Smart Transportation/Supply Chain solution from CartaSense and a personalized music solution from Sevenpop.
In addition to our work with local IoT companies, we’re using the IoT Ignition Labs to support Israel’s strong (and growing) maker/developer community. We have about 500 of these visionary folks just among the Intel Israel employees. They take part in many maker/developer hackathons and meet-up events during the year. The size of the overall Israel maker/developer community is amazing, holding up to ten meet-ups on various technology-related topics per week in the greater Tel Aviv area alone. The ideas that this community comes up with are fantastic – in fact it was a team from Israel that won first place in the Intel® Edison Make It Pro Challenge last year.
We’re keen to support these innovators by offering access to Intel resources and products to help them build the must-have solutions of tomorrow. We’ve been running hackathons to give them a forum in which to work together and come up with new ideas, and the winners of the hackathons are then welcomed into the Ignition Lab to work alongside the Intel experts to develop their idea into a marketable solution. In addition, the Intel Ingenuity Partner Program (IIPP) is a new program, now up and running, that works with a select few start-ups to help them build and market their Intel architecture-based solutions. The combination of the IIPP and the Intel IoT Ignition Lab is a fantastic way for start-ups to develop new and exciting solutions.
Engaging with the IoT Community
Meanwhile, we’re also taking the opportunity to drive further collaboration with the local community of start-ups and innovators at the upcoming DLD Innovation Festival, which is taking place in Tel Aviv in early September. For the first time, Intel will be taking part directly in this event, and we’ll be hosting a number of events and activities at the Intel Innovation building near the main entrance on September 8th and 9th – including
- Speakers with new perspectives: Intel experts in areas such as IoT, wearables, video, media and connectivity will share their thoughts on a range of technology topics beyond Intel’s traditional business.
- Express Connect: We’ll be offering a match-up service for conference attendees to meet with Intel leaders and topic experts by appointment for more tailored, in-depth discussions.
- Showcase area: Some of the new and exciting Intel® technologies such as Intel® RealSense™ technology, Intel’s Wireless Connectivity, smart home and advanced analytics solutions will be on display as part of an ‘airport terminal of the future’ area.
- Live hackathon: Members of Intel’s own developer community will run an IoT-themed hackathon event using Intel Edison to find the next IoT Ignition Labs project. This will be run in collaboration with the Open Interconnect Consortium (OIC) and will highlight how the OIC and Intel are collaborating to create a smarter world.
I invite everyone to come to the DLD event to experience Intel’s technology in action and engage with the people at Intel who are creating the future.”
To continue the conversation on Twitter, please follow us at @IntelIoT
Check out the (non) CRC implementation below. What’s wrong with it?
I’m working on a connectivity library for IoT devices. A serious part of every communication protocol is the data integrity… Read more
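The flawed snippet the post refers to isn't reproduced here, so for reference, this is what a minimal correct bitwise CRC looks like. This sketch uses the common CRC-8 variant (polynomial 0x07, zero initial value, no bit reflection); a production IoT protocol would more likely pick a wider CRC, but the structure is the same.

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 (polynomial 0x07, init 0x00, no reflection, no xorout)."""
    crc = 0
    for byte in data:
        crc ^= byte                      # fold the next byte into the register
        for _ in range(8):               # shift out one bit at a time
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF  # top bit set: reduce by poly
            else:
                crc = (crc << 1) & 0xFF
    return crc

# The standard check value for this CRC-8 variant:
print(hex(crc8(b"123456789")))  # 0xf4
```

Common bugs in hand-rolled versions include forgetting the `& 0xFF` mask, XOR-ing the polynomial before shifting, or summing bytes instead of shifting bits (which is a checksum, not a CRC).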
There’s a lot of talk about Big Data in healthcare right now but for me the value of Big Data is not in the size of the data at all, the real value is in the analytics and what that can deliver to the patient. Healthcare reform is underpinned by a shift to value-based care where identifying best care, best treatment and best prognosis are all driven by business intelligence and business analytics.
I want to share my thoughts on this in a little more detail from a presentation I gave at the NHS England Health and Care Innovation Expo in Manchester, where Intel and Oracle highlighted some of the great work happening around identifying healthcare needs using predictive analytics.
Opportunities for Data Use in Healthcare are Rich
Everywhere I look in healthcare there seems to be an abundance of data; for example, it’s estimated that the average hospital generates 665TB of data annually. But it’s not just the volume of data that presents challenges. The variety of data means that the opportunities for its use are rich but often tempered by some 80 percent of that data being unstructured. Think X-rays, CT and 3D MRI scans as just one area where technology has vastly improved the quality of delivery of these services – but with a consequent exponential growth in the resulting data.
Does more data really bring better care though? I’d argue that it’s the analysis of data that holds the key to solving some of the big challenges faced by providers across the world rather than how much data can be captured or accessed. With that in mind Intel and Oracle are working to help providers integrate, store and analyse data in better ways to deliver improved patient outcomes, including:
- Enabling early intervention and prevention
- Providing care designed for the individual
- Enhancing access to the care for the underserved
Our approach to developing solutions in this area encompasses several layers of the Big Data stack. There’s the core technology, which covers CPUs, SSDs, flash, fabrics, networking, and security. Then there’s the investment in the Big Data platform, which supports the proliferation of Hadoop by making it easier to deploy. Finally, but no less important, are the analytics tools and utilities, which help broaden analysis and accelerate application development.
Oracle and Project O-sarean Empower Citizens
I’d like to highlight a couple of great examples where data sharing is helping to deliver active patient management. Oracle has played a part in the successful Project O-sarean in the Basque Country where the regional public healthcare system covers some 2.1m inhabitants with 80 percent of patient interactions related to chronic diseases. It has been predicted that by 2020 healthcare expenditure would need to double if systems and processes did not change. The results of this new multi-channel health service, powered by voluminous amounts of data, are impressive and include:
- Empowered citizens with access to Personal Health records
- Active patient monitoring for those with chronic diseases
- Health and drug advisory service providing evidence-based advice
The clinician benefits too as 11 acute hospitals, 4 chronic hospitals, 4 mental health hospitals, 1,850 GPs and 820 pharmacies are connected using Oracle solutions to collaborate through the sharing and analysis of patient data. This is a fantastic example of interoperability in healthcare. (Download a PDF from Oracle for more information on the Project O-sarean).
Intel helps Partners Deliver Predictive Analytics Innovations
Here at Intel we’ve been working with MimoCare to improve support for independent living with the Intel® Intelligent Gateway™. Through the use of sensors, MimoCare technology helps the elderly remain safe while living independently in their homes for longer. Analytics identify normal patterns of behavior and predict events, so trigger alerts can be sent to family, friends and carers, while the consolidation of aggregated data can help wider clinical research too. Read more on the great work of MimoCare and Intel’s role in the Internet of Things in Healthcare here.
I think you’ll find a recent blog by my colleague, Malcolm Linington, interesting too – he takes a look at how GPC is innovating to help guide wound care specialists to deliver the most effective treatment plan possible, develop standardized assessment practices, enhance clinical decision-making, and ultimately provide cost savings by streamlining wound care procedures.
I’m excited to share these stories with you as I feel we are only at the start of what is going to be a fantastic journey of using predictive analytics in healthcare. It would be great to hear about some of your examples so please do tweet us via @intelhealth or register and post a comment below.
Find Claire Medd RGN BSc (Hons) on LinkedIn.
The role of IT decision maker has dramatically changed in the past few decades, as technology continues to weave tightly into business strategy. IT leaders are helping business leaders build a successful roadmap by implementing strategies built on cloud, analytics, and new digital tools. Big initiatives, however, come with big decisions and the wherewithal to know which projects take priority.
We launched a poll on our Intel IT Center LinkedIn showcase page to find out what fires IT decision makers tend to extinguish first. The Internet is inundated with lists, blogs, and articles dedicated to top issues and concerns plaguing IT. These buzz-worthy topics include cloud, security, and big data, and we expected one of those to top the list.
Some IT Surprises
In our poll of more than 300 participants, 34 percent pinpointed hardware refresh as their top concern. Cloud structure (20 percent), software refresh (17 percent), and mastering data analytics (12 percent) rounded out the top four.
Security finished seventh with a little over 1 percent; this was one of the biggest surprises of the poll, especially with the large number of high-profile breaches and cybersecurity issues troubling enterprises of late. Cloud concerns were lower than projected as well, even after Microsoft’s recent release of Windows 10.
Some notables in the “Other” category (which accounted for 4 percent of the results) included customer-facing systems and hiring. Should IT be putting more thought into retaining talent, company culture, or customer needs?
IT Decision Makers Pick Hardware Over All Else
As noted, IT executives have a lot on their plate. The majority of respondents are focusing on topnotch hardware first — ditching legacy technology in favor of higher productivity, flexibility, and less downtime. The much-discussed data analytics, cloud, and security didn’t rank as high as we thought, but we’re more interested in knowing what you think. How would you rank your biggest concerns as an IT decision maker?
By RadhaKrishna Hiremane, Director of Marketing for SDI, Cloud, and Big Data at Intel Cloud is at the center of the advent of the digital service economy and the new wave of connected devices and IoT. As more workloads become cloud deployed soluti… Read more
More and more mobile devices, and the software that runs on them, are becoming connected. But the true value of mobility can’t be realized until these devices take advantage of the necessary integration among the underlying systems.
The same principles hold true for mobile business intelligence (BI). Therefore, when you’re developing a mobile BI strategy, you need to capitalize on opportunities for system integration that can enhance your end product. Typically, system integration in mobile BI can be categorized into three options.
Option One: Standard Mobile Features Expand Capabilities
Regardless of the type of solution (built in-house or purchased), these features are considered standard because they use existing and known capabilities on mobile devices, such as e-mailing, sharing a link, or capturing a device screenshot. They provide methods of sharing mobile BI content, including collaboration, without a lot of investment by development teams.
A typical example is the ability to share report output with other users via e-mail with a simple tap of a button located on the report. This simple yet extremely powerful option allows immediate execution of actionable insight. Additional capabilities, such as annotating or sharing specific sections of a report, add precision and focus to the message being delivered or the content shared. In custom-designed mobile BI solutions, the share-via-e-mail option can be further programmed to attach a copy of the report to an e-mail template, thereby eliminating the need for the user to compose the e-mail message from scratch.
Taking advantage of dialing phone numbers or posting content to internal or external collaboration sites is another example. An account executive (AE) could run a mobile BI report that lists the top 10 customers, including their phone numbers. Then, when the AE taps on the phone number, the mobile device will automatically call the number.
Option Two: Basic Integration with Other Systems Improves Productivity
A basic integration example is the ability to launch another mobile application from a mobile BI report. Unlike in Option One, this step requires the mobile BI report to pass the required input parameters to the target application. Looking at the same example of a top 10 customers report, the AE may need to review additional detail before making the phone call to the customer. The mobile BI report can be designed so that the customer account name is listed as a hotlink. When the AE taps the customer name, the CRM application is launched automatically and the account number is passed on, as well as the AE’s user credentials.
This type of integration can be considered basic because it provides automation for steps that the user could otherwise have performed manually: run the mobile BI report, copy or write down the customer account number, open the CRM app, log in to the system, and search for the account number. All of these are manual steps that can be considered “productivity leaks.” However, this type of integration differs from that described in Option One because there is a handshake between the two systems that talk to each other. When using standard features, the report is attached to the e-mail message without any additional logic to check for anything else—hence, no handshake is required.
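One common way to implement the Option Two handshake is a deep link: the BI report renders each customer name as a URL that launches the CRM app with the parameters pre-filled. The sketch below shows the link-building side; the `crmapp` URL scheme and the parameter names are hypothetical, since the actual scheme depends on the CRM vendor's mobile app.

```python
from urllib.parse import urlencode

# Hypothetical deep link from a mobile BI report into a CRM app.
# The "crmapp" scheme and parameter names are illustrative assumptions;
# a real CRM app would document its own URL scheme.
def crm_deep_link(account_id: str, user: str) -> str:
    """Build a link that opens the CRM app on a given account."""
    query = urlencode({"account": account_id, "user": user})
    return f"crmapp://open-account?{query}"

print(crm_deep_link("ACCT-0042", "ae.jones"))
# -> crmapp://open-account?account=ACCT-0042&user=ae.jones
```

When the account executive taps the link, the mobile OS routes it to the registered app, which reads the parameters and skips straight to the account record, exactly the manual steps the text calls "productivity leaks."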
Option Three: Advanced Integration with Other Systems Offers Maximum Value
Of the three options, this is the most complicated one because it requires a “true” integration of the systems involved. This category includes those cases where the handshake among the systems involved (it could be more than two) may require execution of additional logic or tasks that the end user may not be able to perform manually (unlike those mentioned in Option Two).
Taking it a step further, the integration may require write-back capabilities and/or what-if scenarios that may be linked to specific business processes. For example, a sales manager may run a sales forecast report and have the capability of manually overwriting one of the forecast measures. This action would then trigger multiple updates to reflect the change, not only on the mobile BI report but also on the source system. To make things more interesting, the update may need to be real time, a requirement that will further complicate the design and implementation of the mobile BI solution.
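The write-back scenario above can be sketched in a few lines. This is an illustrative outline only, assuming a simple quarter-to-value forecast structure; a real solution would push the change to the source system through its own API, possibly in real time.

```python
def apply_forecast_override(forecast: dict, quarter: str, new_value: float) -> dict:
    """Propagate a sales manager's manual override of one forecast measure.

    `forecast` maps quarter labels to forecast values. The returned
    update record names the systems that must reflect the change; the
    write-back call itself is omitted and would depend on the source
    system's API.
    """
    old_value = forecast[quarter]
    forecast[quarter] = new_value  # update the report's local copy
    return {
        "quarter": quarter,
        "delta": new_value - old_value,
        "targets": ["mobile_bi_report", "source_system"],
    }

forecast = {"Q1": 1.2e6, "Q2": 1.5e6}
update = apply_forecast_override(forecast, "Q2", 1.8e6)
```

The single override fans out into multiple updates, which is exactly what makes this option harder to design than a one-way deep link.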
Bottom Line: System Integration Improves the Overall Value
No matter what opportunities for system integration exist, you must find a way to capitalize on them without, of course, jeopardizing your deliverables. You need to weigh the benefits and costs for these opportunities against your scope, timeline, and budget. If mobile BI is going to provide a framework for faster, better-informed decision making that will drive growth and profitability, system integration can become another tool in your arsenal.
Think about it: how can we achieve productivity gains if we're asking our users to do the heavy lifting for tasks that could be automated through system integration?
Where do you see the biggest opportunity for system integration in your mobile BI strategy?
Stay tuned for my next blog in the Mobile BI Strategy series.
This story originally appeared on the SAP Analytics Blog.
Step into your vehicle. Imagine a ride with the road trip co-pilot of your dreams. Feel the cool breeze of the AC caressing your skin even though you haven’t touched the controls; your co-pilot knows your exact temperature preferences. Sink …
The post Intelligent Driving: Experience a Ride with Intel Internet of Things appeared first on IoT@Intel.
Cybercriminals are fully embracing ransomware. This form of malware, which encrypts files and extorts money from victims, is quickly becoming a favorite among criminals. It is easy to develop, simple to execute, and does a very good job of compelling users to pay in order to regain access to their precious files or systems. Almost any person or business is a potential victim. More importantly, people are paying. Even law enforcement organizations have fallen victim, only to cede defeat and pay the criminals to restore access to their digital files or computers.
In just the first half of 2015, the number of ransomware samples exploded, growing nearly 190%. Compare that to 127% growth for the whole of 2014. We predicted a spike in such personal attacks for this year, but I am shocked at how fast the criminals have accelerated code development.
Total ransomware has quickly exceeded 4 million unique samples in the wild. If the trend continues, by the end of the year we will have over 5 million types of this malware to deal with.
Cybercriminals have found a spectacular method of fleecing a broad community of potential victims. Ransomware uses proven technology to undermine security. Encryption, the long-time friend of cybersecurity professionals, can also be used by nefarious elements to cause harm. It is just a tool; how it is wielded determines whether it is beneficial or caustic. In this case, ransomware uses encryption to scramble select data or critical system files in a way that is recoverable only with a key the attackers possess. The locked files never leave the system, but they are unusable until decrypted. Attackers then offer to provide the key, or an unlocking service, for a fee. Normally in the hundreds of dollars, the fee is typically requested in a cryptocurrency such as Bitcoin, which makes the payment irrevocable and makes it nearly impossible to trace who is on the receiving end.
This type of attack is very personal in nature and specific in what it targets. It may lock treasured pictures, game accounts, financial records, legal documents, or work files. These files matter to us personally or professionally, and that is a strong motivator to pay the criminals.
Payment simply reinforces the attackers' motivation to use this method again and adds resources for continued investment in new tools and techniques. The technical bar for entry into this criminal activity is lowering as malware writers make this type of attack easier for anyone to attempt. In June, the author of the TOX variant offered ransomware as a service: the criminal made software available for other criminals to distribute, handled all the back-end transactions, and took a 20% skim of the ransoms being paid. Fortunately, the author was influenced to a better path after being exposed by Intel Security. More recently an open source kit, named Hidden Tear, was developed so novices could create their own fully functional ransomware code. Although not very sophisticated, it is a watershed moment showing just how accessible making this type of malware is becoming. I expect future open source and software-as-a-service efforts to rapidly improve in quality, features, and availability.
Ransomware will continue to be a major problem. More sophisticated cybercriminals will begin integrating it with other exploitation techniques such as malvertising ad services, malicious websites, bot uploads, fake software updates, watering-hole attacks, spoofed emails, personalized phishing, and signed Trojan downloads. Ransomware will grow, more people and businesses will be affected, and it will become more difficult to recover without paying the ransom. The growth in new ransomware samples is an indication of things to come.
Today is an exciting day, not only for Intel but the technology industry at large, as we introduce the 6th Gen Intel® Core™ processor family. It is – hands down – Intel’s best processor ever, and with the near-simultaneous debut …
Today I gave a presentation to the NHS England Health and Care Innovation Expo alongside Dr. Jonathan Sheldon, Global VP Healthcare at Oracle, where we discussed the role of precision medicine. I wanted to share some of our thoughts from the session with a wider audience here in our Healthcare and Life Sciences community.
More specifically, we talked through the trends impacting healthcare and population health, what's driving innovation to enable the convergence of precision medicine and population health, and how we at Intel are working with Oracle on a shared vision.
Delivering Precision Medicine to Tackle Chronic Conditions
I'd like to frame everything we discuss about precision medicine by reinforcing what I've said in a previous blog: as somebody who spends a portion of each week working in a GP surgery, it's essential that I am able to utilise some of the fantastic research outcomes to deliver better healthcare to my patients. For me, that means focusing on the chronic conditions, such as diabetes, which are a drain on current healthcare resources.
The link between obesity and diabetes is well known, but it's only when we see that a third of the global population is obese, and that every 30 seconds a leg is lost to diabetes somewhere in the world, that we can start to grasp the scale of the problem. The data we have available on diabetes in the UK highlights the scale succinctly:
- 1 in 7 hospital beds is taken up by a patient with diabetes
- 3.9m Britons have diabetes (majority Type 2, linked to obesity)
- 2.5m thought to have diabetes but not yet diagnosed
To combat the rise of diabetes, the NHS spends some £14bn each year treating the condition, including £869m spent by family doctors. What role can precision medicine play in creating a new standard of clinical care to help meet the challenges presented by chronic conditions such as diabetes?
Changing Care to Reduce Costs and Improve Outcomes
I see three changing narratives around care, all driven by technology. First, ‘Care Networking’ will see a move from individuals working in silos to a team-based approach across both organisations and IT systems. Second, ‘Care Anywhere’ means a move to more mobile, home-based and community care away from the hospital setting. And third, ‘Care Customization’ brings a shift from population-based to person-based treatment. Combine those three elements and I believe we have a real chance at tackling those chronic conditions and consequently reducing healthcare costs and improving healthcare outcomes.
How, though, do we achieve better care at lower cost from a technology point of view? This is where Intel and Oracle, together with industry and customers, are working to overcome the challenges of storing and analysing scattered structured and unstructured data, moving irreproducible manual analysis processes to reproducible ones, and unlocking performance bottlenecks through scalable, secure, enterprise-grade, mission-critical infrastructure.
Convergence of Precision Medicine and Population Health
Currently, Precision Medicine and Population Health are two separate themes in healthcare delivery. Population Health is concerned with operational issues, cutting costs, and resource allocation around chronic diseases, whereas Precision Medicine still very much operates in silos and is research-oriented, with isolated decision-making. Both Intel and Oracle are focused on bringing Precision Medicine and Population Health together to provide a more integrated view of all healthcare-related data, simplify patient stratification across care settings, and deliver faster and deeper visibility into operational and financial drivers.
Shared Vision of All-in-One Day Genome Analysis by 2020
We have a shared vision to deliver All-in-One Day primary genome analysis for individuals by 2020, which can potentially help clinicians deliver a targeted treatment plan. Today we're not quite at the point where I can utilise the shared learning and applied knowledge of precision medicine to help me coordinate care and engage my patients, but I do know that our technology is helping to speed up the convergence between healthcare and life sciences to reduce costs and deliver better care.
Keep up-to-date with our healthcare and life sciences work by leaving your details here.