Recent Blog Posts

Get More Out of Your TV Experience

Imagine a device that fits in the palm of your hand and lets you stream content, play light games, and check your social media or email on any HDMI television. That would be pretty convenient, wouldn’t it? Welcome to the new Intel®-based compute stick.


Get More Out of Your TV


 

From living rooms to hotel rooms, big screen HDTVs are practically everywhere. But what if your television could do more? With a new compute stick featuring an Intel® processor, Windows 10, on-board storage, Wi-Fi, and Bluetooth, you can stream content, access rich media, play light games, and even check your favorite social media sites on the biggest screen in the house. Picture richer experiences and limitless entertainment options: an extension of your regular TV that also keeps you connected to the rest of the world.

 

“With the compute stick, there are virtually no limits to what you can get on your TV as far as the type of entertainment you want to access,” said Xavier Lauwaert, category manager at Intel Client Computing Group. “It’s one of the rare devices that enables you to easily connect online and start watching your favorite movies or soap operas, anywhere in the world, from the comfort of your couch or your hotel room.”

 

Given its size, you can also take your personal content with you. Built-in storage and a Micro SD slot let you store your files locally, so an always-on Internet connection is not required to enjoy your content.

“I like to say a compute stick gives your TV a college degree and your smart TV a PhD. As long as it has a website, you can access almost anything, anywhere,” Lauwaert adds.

 

Enrich Streaming and Gaming


 

The compute stick lets you access the full universe of streaming entertainment services available through any web browser. Fans of Netflix or YouTube can stream movies, music, and video. That’s pretty useful if you’re on the road and need to catch up on the latest zombie apocalypse drama or have free time at home to binge watch an entire season.

“As a Windows device, you can even access premium content through iTunes,” adds Lauwaert. “It’s the only non-iOS device that enables access to that type of content on something that’s smaller than the size of your palm.”

For light gamers, the compute stick connects easily to the Windows Store, which boasts a fairly exhaustive library of Windows 10 titles such as Minecraft: Windows 10 Edition Beta, Age of Empires: Castle Siege, and Angry Birds. More titles are added to the store every day, so consumers can enjoy light gaming in any room of the house and rack up Xbox Achievements along the way.

 

“Windows 10 includes a dedicated Xbox app, which is the gateway to the Xbox Live ecosystem,” Lauwaert said. “Whether you own an Xbox One or not, the app gives gamers a way to connect and share content through the compute stick. For example, gamers can also record and clip their most epic gaming moments with the built-in Game DVR and then share them with their Xbox Live friends.”

 

Multitask Massively

 

Stream content, check email, browse the web, and stay connected on social media—all at the same time and on the same huge HDTV screen. The Intel®-based compute stick with Windows 10 includes a feature to split your TV screen or monitor so you can multitask to your heart’s content.

 

“One of the cool things about Windows 10 is the updated Snap Assist feature,” adds Lauwaert. “You can snap a window and resize it to two-thirds of your TV screen for the content you’re watching, and then snap a second window to automatically fill in the available space for your social media profiles.”

 

“For example, anyone can watch the national basketball tournament on one side, and on the other side, you’ve got your Twitter feed so you can follow the online conversation about the game.”

 

If you’re ready to get more out of your HDTV, download this handy Flash Card to learn more.

 

Read more >

The Visual Cloud Second Wave

In just the last four months, Microsoft announced its HoloLens augmented reality headset, Google launched its VR View SDK that lets users create interactive experiences from their own content, Facebook expanded its live video offering, Yahoo announced that it will live stream 180 Major League Baseball games, Twitter announced it will live stream 10 NFL games, Amazon acquired image recognition startup Orbeus, and Intel acquired immersive sports video startup Replay Technologies.


Are these events unrelated, or are they part of something bigger? To me, they indicate the next wave of the Visual Cloud. The first wave was characterized by the emergence of video on demand (e.g., Netflix), user-generated video content (e.g., YouTube), and MMORPGs (e.g., World of Warcraft). The second wave will be characterized by virtual reality, augmented reality, 3D scene understanding and interactivity, and immersive live experiences. To paraphrase William Gibson, the announcements listed above indicate that the future is already here – it’s just not evenly distributed. And it won’t take long for it to spread to the mainstream – remember that YouTube was founded in 2005 and Netflix only started streaming video in 2007. By 2026, the second wave will seem like old technology. In the technology world, in five years, nothing changes; in ten years, everything changes.

 

But why now? As with any technology, a new wave requires the convergence of two things: compelling end-user value and technology that is capable and mature.

 

It’s pretty clear that this wave can provide enormous user value. One early example is Google Street View (launched 2007). I’m looking for a new house right now and I can’t tell you how much time I’ve saved not touring houses that are right next to a theater or service station or other unappealing neighbor. While this is a valuable consumer application, the Visual Cloud also unlocks many business and public sector applications like graphics-intensive design and modelling applications and cloud-based medical imaging.

 

But, is the technology ready? The Visual Cloud Second Wave is an integration of several technologies – some are well established, some still emerging. The critical remaining technologies will mature over the next few years – driving widespread adoption of the second wave applications and services. In my opinion, the key technologies are (in decreasing order of maturity):

 

  1. Cloud Computing – the Visual Cloud requires capabilities that only cloud computing can deliver. In most ways, the Visual Cloud First Wave proved out this technology. These capabilities include:

 

    • Massive, inexpensive, on-demand computing. Even something as comparatively simple as speech recognition (think Siri, Google Now, Cortana) requires the scale of the cloud to make it practical. Imagine the scale of compute required to support real-time global video recognition for something like traffic management.

    • Massive data access and storage capacity. Video content is big – a single high-quality 4K video requires 30-50 GB of storage, depending on how it is compressed (see the storage sketch after this list).

    • Ubiquitous access. Many Visual Cloud applications are about sharing content between one user and another, regardless of where they are in the world or what devices they are using to create and consume content.

    • Quick-start development. Easy access to application development tools and resources through Infrastructure as a Service (IaaS) offerings like Amazon Web Services and Microsoft Azure makes it much faster for innovative Visual Cloud developers to create new applications and services and get them out to users.

           

  2. High Speed Broadband. See above: video content is big. Even today, moving video data around is a challenge for many service providers. Video is already over 64% of consumer internet traffic and is expected to grow to over 80% by 2019. High-quality visual experiences also require relatively predictable bandwidth; sudden changes in latency and bandwidth wreak havoc on visual experiences, even with compensating technologies like HLS and MPEG-DASH (a toy rate-selection sketch follows this list). This is especially true for interactive experiences like cloud gaming or virtual and augmented reality. The deployment of wireless 5G technologies will be critical to enable the Visual Cloud to grow.

           

  3. New End User Devices. Most of these advanced experiences don’t rely solely on the cloud. For both content capture and consumption, devices need to evolve and improve. Capture technologies like Intel® RealSense™ Technology’s depth imaging provide applications with visual information that isn’t available from traditional devices, and consumption form factors like VR headsets are necessary for some experiences.

           

  4. Visual Computing Technologies. While many visual computing technologies, like video encoding and decoding and raster and ray-traced rendering, have been around for many years, they have not been scaled to the cloud in any significant way. That process is just beginning. Other technologies, like the voxel-based 3D point clouds used by Replay Technologies, are just emerging, and advanced technologies like 3D scene reconstruction and videogrammetry are several years from reaching the mainstream.

           

  5. Deep Learning. Computer vision, image recognition, and video object identification have long depended on model-based techniques like HOG (histogram of oriented gradients). While these techniques have had some limited use, in the last couple of years deep learning for image and video recognition – using neural networks to classify objects in image and video content – has emerged as one of the most significant new technologies in many years (a minimal contrast of the two approaches is sketched below).
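
To put some rough numbers behind the storage and bandwidth points above, here is a back-of-the-envelope sketch in Python. The durations and bitrates are illustrative assumptions, not figures from any particular service, and the rate-selection heuristic is only a toy version of what HLS and MPEG-DASH players actually do.

```python
# Rough video math behind "video content is big":
# size = bitrate * duration. Bitrates below are assumed, illustrative values.
def video_size_gb(duration_hours: float, bitrate_mbps: float) -> float:
    bits = bitrate_mbps * 1e6 * duration_hours * 3600
    return bits / 8 / 1e9  # decimal gigabytes

for mbps in (35, 50, 60):
    print(f"2 h of 4K at {mbps} Mbit/s -> ~{video_size_gb(2, mbps):.0f} GB")

# Toy adaptive-bitrate selection in the spirit of HLS/MPEG-DASH clients:
# pick the highest rendition that fits within a safety margin of measured
# throughput. Real players also weigh buffer occupancy and latency jitter.
LADDER_KBPS = [1500, 3000, 6000, 12000, 25000]  # hypothetical renditions

def pick_rendition(throughput_kbps: float, safety: float = 0.8) -> int:
    budget = throughput_kbps * safety
    fitting = [r for r in LADDER_KBPS if r <= budget]
    return max(fitting) if fitting else LADDER_KBPS[0]

print(pick_rendition(10_000))  # -> 6000, leaving headroom for jitter
```

At 35-60 Mbit/s overall bitrate, a two-hour film lands right in the 30-50 GB range cited above, which is why both storage and predictable bandwidth matter.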
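
And to make the deep learning point concrete, the snippet below sketches the older model-based pipeline (HOG features feeding a linear SVM) on synthetic stand-in data. It uses scikit-image and scikit-learn purely as illustrative choices; the post does not prescribe any toolkit.

```python
import numpy as np
from skimage.feature import hog          # classic model-based features
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Stand-in data: forty 64x64 grayscale "images" in two synthetic classes.
images = rng.random((40, 64, 64))
labels = np.repeat([0, 1], 20)

# Model-based pipeline: hand-designed HOG descriptors + a linear classifier.
features = np.array([hog(img, pixels_per_cell=(8, 8)) for img in images])
clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:3]))

# A deep network instead learns its features end to end from raw pixels,
# which is what drives the accuracy gains described above.
```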

           

If you’re interested in learning more about emerging workloads in the data center that are being made possible by the Visual Cloud, you can watch our latest edition of the Under the Hood video series or check out our Chip Chat podcasts recorded live at the 2016 NAB Show. Much more information about Intel’s role in the Visual Cloud can be found at www.intel.com/visualcloud.

Read more >

Intel IoT Gears Up for Hannover Messe with Intelligent Industrial Solutions

The Internet of Things (IoT) is driving a new industrial IoT (IIoT) revolution. With the availability of low-cost sensors and high demand for optimizing complex manufacturing processes, more companies are adopting IoT technologies. In fact, a recent McKinsey & Company study … Read more >

The post Intel IoT Gears Up for Hannover Messe with Intelligent Industrial Solutions appeared first on IoT@Intel.

Read more >

Intel joins industry letter supporting IANA transition plan

By Audrey Plonk, global security and Internet policy specialist at Intel. This week, Intel joined other companies and trade associations in reiterating our support for the current proposal to transition stewardship of the Internet Assigned Numbers Authority (IANA) to the global multistakeholder … Read more >

The post Intel joins industry letter supporting IANA transition plan appeared first on Policy@Intel.

Read more >

The Rise of Next Generation Sequencing

           

Following Bio-IT World, we’re asking some of the world’s top researchers how next generation sequencing (NGS) benefits them and their work. Today, we catch up with Mayo Clinic expert David I. Smith, Ph.D., who says NGS allows him to ask and answer questions in a surprisingly short amount of time. The real value, he points out, is that sequencing gives researchers the ability to look at trillions of molecules to see what is happening in populations and move research discoveries, particularly in cancer, forward.

           

Watch the above video to learn more and discover how cancer treatments will be dramatically different five years from now, and what keeps Smith up at night when it comes to NGS.

Read more >

Life in the Fast Lane: Intel Automotive Security Workshop Participants Race Toward Autonomous Driving Safety

Buckle up: the Automotive Security Review Board (ASRB) is soaring down the fast lane toward collaborations that are already providing researchers with new opportunities to improve automotive security products and connected car technology. In an effort to increase automotive exploration … Read more >

The post Life in the Fast Lane: Intel Automotive Security Workshop Participants Race Toward Autonomous Driving Safety appeared first on IoT@Intel.

Read more >

Intel and Alan Turing Institute form Strategic Partnership to help solve Big Data Healthcare Challenges

Realizing the potential in big data is a challenge we’re enthusiastically tackling head-on here at Intel, and a recently announced strategic partnership with the Alan Turing Institute (ATI) in the UK is just one example of how working with key partners can help us drive scientific and technological discoveries.

           

We want to help turn the rapidly increasing volume of data into meaningful insights that will help solve global challenges across a number of areas, including health and life sciences. The ATI’s vision is an exciting proposition: to be a national institute that supports the UK in becoming a world leader in data science through:

           

• Research into the fundamentals of algorithms for data science;
• Training the next generation of researchers;
• Addressing ways in which scientific advances can be taken into practice;
• Collaborating with a range of public and private organizations.

           

If you want the deep dive on the ATI’s forward-looking vision, I’d highly recommend reading Institute Director Andrew Blake’s Alan Turing Institute Roadmap for Science and Innovation.

           

Alan Turing is a name that is familiar to many of you, I’m sure, and as he is widely seen as the founder of modern computer science, we are delighted that new algorithms developed by the ATI will feed into the design of future generations of Intel® microprocessors. Intel will provide the ATI with world-class high-performance computing solutions, including Intel® Xeon®-based workstations, Intel software tools, and access to an Intel data center cluster based on Intel® Xeon® and Intel® Xeon Phi™ processors.

           

People and Technology

But great technology is just one part of the story of Intel’s strategic partnership with the ATI, so I’m excited to tell you that we’re supporting the development of the next generation of data scientists too. Alongside hiring a number of talented individuals to work at the ATI, we will be supporting the PhD and Research Fellow programme, which will help fulfil one of the core aims of the Institute: helping to bridge the skills gap and place the UK in a strong global position in this sector.

           

Solving the Big Data Challenges in Healthcare

Analysis of big data has the potential to solve some of the biggest challenges in healthcare and help us deliver better patient care: All-in-One-Day personalized medicine, unlocking the value of electronic medical records through natural language processing, and making sense of the ever-increasing data produced by wearables and sensors. It’s an exciting time, and we’re eager to see where this fantastic strategic partnership between Intel and the Alan Turing Institute takes us in the coming years. I look forward to keeping you updated in future blogs.

           

Read more >

Revolutionizing Seismic Data Processing in Oil and Gas Exploration

Industries around the world rely on new technologies to enhance their capabilities and competitive advantages. This is true for the oil and gas industry, where high-performance computing (HPC) is essential to discovering new natural resources using exploration geophysics. As oil and gas exploration is competitive, risky, and very expensive, the accuracy and efficiency of seismic surveys using technology and HPC simulation are critical to reducing business risk and operating costs. Imagine the business impact of reducing the timeframe for acquiring seismic simulation results from two months to just a couple of days. I recently came across such a case with China National Offshore Oil Corporation (CNOOC).



CNOOC is China’s largest offshore oil and gas producer, whose business is focused on searching for large- and medium-sized oil and gas fields offshore. To find new oil and gas resources, CNOOC relies on seismic data from offshore acquisition vessels that record high-resolution echoes from sound waves bouncing off the sea floor.

           

As it gets deeper into its exploration activities, however, the size of CNOOC’s seismic data grows as well, with more projects and higher complexity. A single seismic project, for example, may involve over 100TB of data. CNOOC needed a more efficient, scalable, and higher-performing storage system to manage and transfer large amounts of data as well as ensure efficiency in collecting, recording, processing, and interpreting seismic data.

           

To solve its data storage woes and enhance its oil and gas exploration capabilities, CNOOC evaluated different options before deciding to deploy a solution based on open-source software and industry-standard high-volume servers. It built large storage clusters with Intel® Enterprise Edition for Lustre* software, servers based on the Intel® Xeon® processor E5-2600 v3 product family, and the Intel® Ethernet Server Adapter X520 family, which provides 10Gb Ethernet connections for the storage cluster.

           

Lustre* is an open-source, parallel file system designed for HPC needs. A Lustre* file system can be scaled across multiple storage clusters with thousands of storage nodes, which makes it very suitable for HPC applications and supercomputers. The Lustre*-based solution not only met CNOOC’s storage needs but also reduced the cost of performance and capacity expansion compared to its old system. And because the solution comes with Intel® Manager for Lustre* software, CNOOC found it easier to handle Lustre* deployment, expansion, and management.
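
As a rough intuition for why a parallel file system helps here, consider striping: a file spread across N object storage targets (OSTs) can be read at roughly the sum of their bandwidths, until the client’s network link becomes the cap. The Python sketch below uses made-up bandwidth figures purely for illustration; they are not CNOOC’s numbers.

```python
# Toy model of parallel-file-system striping: aggregate read bandwidth
# grows with the number of OSTs until the client's link saturates.
# All bandwidth figures are illustrative assumptions.
def striped_read_gbs(n_osts: int, ost_gbs: float = 0.5,
                     client_link_gbs: float = 1.25) -> float:
    """1.25 GB/s is roughly one 10Gb Ethernet link."""
    return min(n_osts * ost_gbs, client_link_gbs)

for n in (1, 2, 4, 8):
    print(f"{n} OSTs -> ~{striped_read_gbs(n):.2f} GB/s per client")
```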

           

Thanks to this solution, CNOOC improved storage performance by 4.4 times and gained a unified storage service that simplified and centralized services for HPC projects and increased utilization of storage facilities and network bandwidth. Computing results that used to take two months can now be obtained in a matter of days.

           

If you want to know in detail how CNOOC enhanced its storage system for improved seismic data processing, you can read the complete case study here.

Read more >

Extreme-Scale Computing for All in One Day and Beyond

Intel is driving toward a day when cancer patients routinely have their tumor DNA sequenced and receive precision treatment plans based on their unique biomolecular profile—all within 24 hours. We call this vision All in One Day, and we believe that with the right blend of industry-wide commitment, innovation, and collaboration, we can deliver on that vision in 2020.

           

All in One Day isn’t an endpoint, though. I believe it’s also a step toward a world in which life science researchers use ultra-sophisticated 3-D models to simulate the workings of the human body and predict health outcomes. As Dr. Jason Paragas, director of innovation at Lawrence Livermore National Lab, likes to say, “We’d never ask an engineer to build a bridge or design an airplane without modeling how it’s going to perform in the real world. But doctors do the equivalent every day.”

If we can empower researchers with advanced biomedical models and simulations, we stand to transform the practice of medicine. Building on the genomics revolution, we may be able to take much more guesswork out of medicine and dramatically expand the universe of available diagnostics, treatments, and preventive approaches.

It’s going to take massive increases in computing performance to support these breakthroughs. In the United States, the President’s National Strategic Computing Initiative (NSCI) aims to advance the technologies needed for computers that are 100 times more powerful than today’s most capable supercomputers. Other nations are moving forward with similar initiatives.

I recently worked with two of my HPC colleagues to develop a whitepaper that explores precision medicine and discusses Intel’s role in enabling it.

We talk about the central role of Intel® Scalable System Framework and its ability to support the convergence of HPC modeling/simulation, health analytics, machine learning, and visualization that precision medicine will require.

We touch on key technology innovations as well as collaborations with life science leaders to create open source platforms, tools, applications, and algorithms for precision medicine.

           

And we note that the advances provided by these extreme-scale computers will help us address critical challenges like climate change and renewable energy as well as enable progress toward predictive biology and precision medicine.

           

I hope you’ll read the whitepaper and share your thoughts. What opportunities do you see for life sciences computing to transform biomedicine? What roadblocks are in the way?

Read more >

Helping Healthcare Organizations Better Understand Their Breach Security Maturity

Ransomware: it’s a word I’m seeing with increasing frequency amongst security experts, and one I’m keen to let others know about within healthcare, because the dangers are already having a major impact on organisations in health and life sciences. A couple of months ago it was reported that a hospital in Germany suffered a security breach that locked all of its electronic medical records in what appeared to be a ransomware attack; the hospital confirmed that the malicious virus had been sent from an unknown source. Fortunately, in this case the hospital added that no patient information had been accessed, though it had not yet calculated the cost of regaining access to the data.

           

Ransomware Could Cripple The Ability to Deliver Care

When you consider that a personal health record can be 10x to 20x more valuable to a criminal than an individual’s credit card information, you begin to understand the scale and importance of mitigating a wide range of security breaches for healthcare organisations. Breach types like ransomware go beyond unauthorized access to sensitive patient information: they compromise the ability of healthcare providers to access that information and cripple their ability to deliver care. No organisation is immune from breaches.

           

Security Workshop for Nordic Regions

That’s why I’m excited to welcome security experts from Intel, including David Houlding, Intel’s Healthcare Privacy and Security Lead, to Sweden at the end of May 2016 for a workshop to help healthcare organisations gain a better understanding of their breach security maturity and benchmark their priorities against the rest of the health and life sciences industry across 8 breach types, including ransomware, and 42 breach security capabilities. The event is invite-only, but if you are interested in finding out more on behalf of your healthcare organisation, and potentially attending, please do get in touch today.

           

At the workshop, David will talk through the Security Maturity Model developed by Intel and a consortium of industry partners and help organisations get the most out of it. It’s a fantastic resource and, no matter which country you are based in, I would recommend attending to help you and your organisation identify where your breach priorities or security capabilities fall short of the industry and established best practices, enabling you to make more informed decisions about where and how to invest future security spending.

           

The Cost Of Under-Investment In Security

There is, of course, a cost to not investing in security too. In Sweden, I have seen an example of the cost to a healthcare organisation which suffered a ransomware attack. An infected file was opened from a webmail application while a doctor was connected to the hospital network. The malware began encrypting local files and those stored on the network, which included patient data from connected health centres outside of the hospital. There was also a .txt file containing a ransom note.

           

Fortunately, the IT support team noticed the attack within 90 minutes and were able to stop backups of the infected data and shut down unauthorized access to the network. After many hours of work to rectify the breach, network access was restored some 22 hours after the initial attack. I estimate that the cost in IT resource time alone was somewhere in the region of 20,000 Swedish krona, which equates to approximately $2,500 or €2,200. The cost in time lost by clinicians having to use workarounds, and the potential loss had personal data fallen into the wrong hands, would be multiples of this figure.

           

Learnings From Healthcare Security Breaches

I’m always keen to understand what lessons can be learned from security breaches such as the one described above, because only then can we start to win the battle against these cyberattacks and keep patient data safe and secure. Intel’s Security Maturity Model is a huge step forward in helping healthcare organisations better understand where they are today and where they need to go in order to mitigate the risks of a breach. This is why I’m delighted that our workshop at the end of May will bring together healthcare organisations and Intel security experts here in Sweden to share their knowledge.

           

– Contact the author: Johan Liden

– Security Workshop, Sweden, May 31st – June 1st: Register your interest

– Intel Health and Life Sciences: Security and Privacy

Read more >

Machine Learning: A Full Stack View


           

According to Gartner’s Hype Cycle, machine learning is the hot new trend in the technology industry. Why all the hype and excitement around artificial intelligence, big data, machine learning, and deep learning? As many of us in the industry know, machine learning and neural networks are certainly nothing new. The buzz this time around, however, is being driven by the confluence of multiple factors: bigger data (and, more importantly, labeled data), advances in scale compute, algorithmic innovation, and, most importantly, killer apps that can take advantage of the data explosion. In a world with billions (and, in the near future, tens of billions) of connected devices, the amount of unstructured data collected by large organizations has quickly become unmanageable by traditional analytical techniques. Machine learning (ML), and its related branch, deep learning (DL), provide excellent approaches to structuring massive data sets to generate insights and enable monetization opportunities.

           

Generally speaking, machine learning is a set of algorithms that learn from data. However, ML these days isn’t your father’s simple regression technique that might have worked well on smaller data sets. The explosion of unstructured data requires new algorithms to process it, and the ML/DL ecosystem is evolving quickly in support. From a deep learning perspective, a great example is the recent Microsoft Research ImageNet winner, the 152-layer residual network (ResNet). This massive neural network has an amazing amount of representational power and actually exceeds human-level performance on many visual recognition tasks.
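
For a sense of how accessible these models have become, here is a minimal sketch that runs a 152-layer residual network of the kind mentioned above on a single image. It uses PyTorch/torchvision as an illustrative choice of framework (the post names none), and the image path is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Load the 152-layer residual network with ImageNet-trained weights.
model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")  # placeholder image path
batch = preprocess(img).unsqueeze(0)            # shape (1, 3, 224, 224)

with torch.no_grad():
    class_id = model(batch).argmax(dim=1).item()
print(class_id)  # index into the 1,000 ImageNet classes
```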

           

           

These types of algorithms actually perform better the more data they consume, making them a perfect match for the unending amount of data created today, assuming, of course, we can efficiently annotate it. From an application perspective, ML is not limited to ImageNet and object recognition. It has already changed the way we shop at websites like Amazon and the way we are entertained by services like Netflix. ML is also being leveraged by cybersecurity applications to adapt quickly to threats and by financial services institutions for highly accurate fraud and insider trading detection.

           

To quote Sundar Pichai at Google, “Machine learning is a core, transformative way by which we’re re-thinking how we’re doing everything.”

           

Because of this, Intel is investing heavily to enable the industry by providing a full-stack solution, from highly scalable and efficient hardware to tools and libraries that ease the development and deployment of machine learning models into applications.

           

Starting at the lowest level, Intel is optimizing its hardware to target the highest single-node and cluster-level performance across compute, memory, networking, and storage. This work builds on the capabilities of the Intel® Xeon® and Intel® Xeon Phi™ processor families, Intel® Solid-State Drives, new 3D XPoint™ memory technology, and Intel® Omni-Path Architecture. Our Intel® Scalable System Framework (Intel® SSF) configurations are designed to balance these technologies and efficiently and reliably scale to increasingly larger data sets.

           

Moving up to the next level of the stack, a set of highly tuned and optimized libraries is required to truly extract maximum performance from the hardware. Enhancements and additions are being made to the Intel® Math Kernel Library, which provides a set of tuned math primitives, and the Intel® Data Analytics Acceleration Library, which optimizes and distributes a broad set of machine learning algorithms. These libraries also abstract the complexity of the underlying hardware and instruction set architecture (ISA), providing a level of programming that is comfortable for most developers while still highly performant. In addition to enhancing the libraries themselves, we are actively integrating with and contributing code back to key open source projects that are influential in machine learning. This includes seminal projects like Caffe from UC Berkeley, the Apache Spark project, Theano from the University of Montreal, Torch7, which is used by Facebook and Twitter, and others like Microsoft’s CNTK and Google’s TensorFlow.
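
As a small taste of what “tuned math primitives” means in practice, the sketch below times a large matrix multiply in NumPy. Many NumPy builds link against an optimized BLAS such as Intel MKL (a common packaging choice, though not guaranteed); np.show_config() reveals which backend a given install actually uses.

```python
import time
import numpy as np

np.show_config()  # look for MKL (or another BLAS) in the output

# A single-precision 2048x2048 matrix multiply: the GEMM call is
# dispatched to whatever BLAS library the NumPy build is linked against.
a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

t0 = time.perf_counter()
c = a @ b
print(f"matmul took {time.perf_counter() - t0:.3f} s, checksum {c.sum():.1f}")
```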

           

On an even broader front, Intel is helping enterprises and application developers adopt machine learning through the open source Trusted Analytics Platform (TAP) project, which provides everything from big data infrastructure and cluster management tools to model development and training and application development and deployment resources. To further reduce friction for developers, TAP works with or is pre-integrated with popular frameworks and toolkits such as Spark MLlib, H2O, DL4J from Skymind, and DataRobot, to name a few.

           

For a deeper dive into Intel’s strategy, libraries, and recent customer activities in the machine learning space, you can explore the slides from a machine learning session at the recent 2016 Intel Developer Forum in Shenzhen, China. You can also access the video from a talk I gave in late March at the 2016 Strata + Hadoop World in San Jose.

           

Please stay tuned for more announcements and initiatives throughout 2016 from Intel regarding machine learning!

Read more >

Risks of ‘Unsubscribing’ from Unwanted Email


We all receive loads of unwanted email solicitations, warnings, and advertisements. It can be overwhelming to the point of being obnoxious. Some days it feels like an unending barrage of distracting deliveries that requires constant scrubbing of my inbox.

           

Beyond being frustrating, there are risks. In addition to the desired and legitimate uses of email, there are several shady and downright malicious uses. Email is a very popular method for unscrupulous marketers, cyber criminals, and online threats to conduct social engineering attacks. Spam, phishing, and fraud are common. Additionally, many attackers seeking to install malware use email as a delivery mechanism. Electronic mail can be an invasive communication mechanism, so care must be taken.

           

Unfortunately, like most people, I tend to make my own situation even worse. In my professional role, I devour a tremendous amount of industry data, news, and reports to keep on the pulse of change for technology and security. This usually requires me to ‘register’ or provide my email address before I get a ‘free’ copy of some analysis I desire. I could just give a false email, but that would not be ethical in a business environment. It is a reasonable and expected trade, where both parties benefit. I get the information I seek and some company gets a shot at trying to sell me something. Fair enough, so I suffer and give my real work email. In this tacit game, there is an escape clause. I can request to no longer be contacted with solicitations after the first email lands in my inbox. Sounds simple, but it is not always that easy.

           

The reality is I receive email from many more organizations than I ‘register’ with, which means someone is distributing my electronic address to many others. They in turn repeat the process, and now the tsunami surging into my inbox gains strength. I become a target of less-than-ethical marketers, cyber attackers, and a whole lot of mundane legitimate businesses just trying to reach new customers.

           

Some include an ‘unsubscribe’ link at the bottom, which holds the appealing lure of curbing the flood of email destined for the trash anyway. But be careful. Things are not always as they seem. While attempting to reduce the load in your inbox, you might actually increase the amount of spam, and in the worst case you could infect your system with malware by clicking that link. Choose wisely!

           

Recommendations for using ‘unsubscribe’:

Rule #1: If it is a legitimate company sending the email, use the ‘unsubscribe’ option.

Make sure the link points back to a domain associated with the purported sender. Legitimate companies, or their marketing vendor proxies, will usually honor the request.
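
For the cautious, here is a rough sanity check in the spirit of Rule #1, sketched in Python: does the unsubscribe URL’s host line up with the sender’s domain? The addresses are placeholders, and naive suffix matching like this misses tracking domains and subtler tricks, so treat it as a first filter only.

```python
from urllib.parse import urlparse

def domains_align(from_address: str, unsubscribe_url: str) -> bool:
    """Naive check: the link host should be the sender's domain
    or one of its subdomains."""
    sender_domain = from_address.rsplit("@", 1)[-1].lower()
    link_host = (urlparse(unsubscribe_url).hostname or "").lower()
    return link_host == sender_domain or link_host.endswith("." + sender_domain)

# Placeholder addresses for illustration:
print(domains_align("news@example.com", "https://mail.example.com/unsub"))  # True
print(domains_align("news@example.com", "http://bit.ly/a1b2c3"))            # False
```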

           

Rule #2: If it is a shady company, do not ‘unsubscribe’; just delete.

If your mail service supports it, set up a BLOCK or SPAM rule to automatically filter future messages from these senders.
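
If your provider exposes IMAP, one way to approximate such a rule is a small script that files everything from a shady sender into the junk folder. The host, credentials, folder name, and addresses below are placeholders, not a real service, and most providers offer the same thing more safely through their web UI filters.

```python
import imaplib

# All connection details and addresses here are placeholders.
with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("me@example.com", "app-password")
    imap.select("INBOX")
    # Find every message from the unwanted sender.
    _, data = imap.search(None, '(FROM "spammy@shady.example")')
    for num in data[0].split():
        imap.copy(num, "Junk")                  # file a copy under Junk
        imap.store(num, "+FLAGS", "\\Deleted")  # mark the original deleted
    imap.expunge()
```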

           

If it is seriously malicious, the ‘unsubscribe’ link may take you to a site preconfigured to infect or compromise your system. This is just another way bad guys get people to click on embedded email links. DON’T FALL FOR IT! It may result in a malware infection or system compromise.

           

If it is semi-malicious, like a spam monster who will send mail to any address they can find, then clicking the ‘unsubscribe’ link actually tells them this is a valid email address where someone is reading the mail. That is valuable for them to know: they can sell your address as ‘validated’ to others and use it for future campaigns. End result: more spam.

           

Rule #3: Some spam and solicitations don’t offer any ‘unsubscribe’ option.

Just delete. It is probably not a professional company you want to patronize anyway.

           

If you are in a work environment, be sure to know and follow your corporate policies regarding undesired email. Many companies have security tools that can inspect, validate, or block bad messages. Additionally, they may have solutions that leverage employees’ reporting of bad email to better tune such protections.

           

Just remember: if you are not sure the email is legit, don’t open or click anything, and NEVER open attachments, including PDFs, office documents, HTML files, or executables. Only open attachments from trusted sources, as attachments can be used by attackers to deliver Trojans that may infect your system with malware, ransomware, or other remote manipulation tools. Cybercriminals often look like real companies with real products. Make email life easier by ‘unsubscribing’ with care and necessary forethought.

           

           

Interested in more? Follow me on Twitter (@Matt_Rosenquist) and LinkedIn to hear insights and what is going on in cybersecurity.

Read more >