VerdinaNET

Everything posted by VerdinaNET

  1. While I was reading some articles online, I was wondering which OS you prefer for your computer. In your opinion, is Mac, Linux or Windows the best?
  2. Just decided to say hello to all of you, even though I've been a member for some months now. I have found very useful threads here and I look forward to communicating with everyone in the gamers talk & reviews community.
  3. What type of game is your favourite, and do you prefer playing games alone or with friends?
  4. Hello, Ssilvester. We are so glad to hear that we've been recommended to you. Choose us! We guarantee a 24/7 responsive, kind and highly experienced team who will provide you with the best solution for your traffic, budget, issue (if you're a customer already), etc. Apart from that, we have enterprise Dell hardware with highly secure and reliable qualities. Good luck with your business!
  5. Back in the '90s, when you couldn't traverse a college campus without the violent echoes of heavy metal accompanied by the anxious clicks of a Doom death match, Microsoft reigned king over PC gaming. At the time, using an operating system other than MS-DOS was today's equivalent of using a controller to play a twitch shooter on a luxurious custom rig. Times have changed since 1993. We no longer have to worry about JNCOs coming back in style or John Romero making anyone his bitch. Instead, we're welcomed with myriad options through which to play our games. With Linux finally emerging as viable in the market, thanks in part to Valve's Steam Machine initiative, no longer is Windows, or MS-DOS for that matter, the indisputable king of PC gaming.
Taking on the king
With Valve's Steam client having gained traction as the most popular pick for digital PC game marketplaces, it's worth noting that the company co-founder, Gabe Newell, boldly claimed as far back as 2013 that "Linux and open source are the future of gaming." In fact, the lord and savior himself called Windows 8 "a catastrophe for PC gaming" little more than a year earlier. But why? As of September 2015, only 1,500 Steam games were compatible natively with the entire range of Linux distributions. Meanwhile, Windows thrives on a 6,464 title count, more than four times as many as Linux, according to Phoronix. That number doesn't even include the games exclusive to the Universal Windows Platform. Many of the most-played Steam games, such as Dark Souls III, Grand Theft Auto V, and Rocket League, still aren't available on Linux, and some may never be. It's obvious then why PC World reported earlier this year that not only did Linux users account for less than 1 percent of Steam users at 0.91% in February 2016, but that was actually a dismal 0.04% decrease from the month prior. Newell said all the way back in 2012 that Valve wanted to make 2,500 games available on Steam for Linux. Even after launching its own Linux distribution specifically geared towards gaming, the company has still failed to deliver on that promise nearly four years later.
SteamOS: holding Linux gaming back
There are undoubtedly perks to using Linux. Unlike Microsoft's Windows and Apple's Mac OS X, the open-source operating system is available in a number of distributions, or distros, each marked with a unique set of benefits. Among these distros is Valve's own SteamOS, a proposition that would ostensibly bring PC gaming to the living room. And, had it been more than an Ubuntu port stripped of everything but Steam's Big Picture Mode, maybe, just maybe, it would have stood a chance at normalizing PC gaming in your family's home entertainment center. But, alas, there was nothing new to see there. Truth be told, SteamOS wasn't only limited in its functionality. Some hardware makers didn't even release their Steam Machines because of the sheer performance issues they were running into with Valve's custom operating system. Falcon Northwest, for example, told Digital Trends last year that SteamOS "doesn't support some common functions that you'd expect from an operating system." These issues simply wouldn't persist in a Windows environment. What's more, you can easily get the superior performance of Windows paired with the accessibility of SteamOS in as few steps as configuring Steam to launch at system startup while getting the client to open in Big Picture Mode by default. (A rough sketch of that startup configuration follows this post.)
Linux is out of control
While there are some redeeming qualities about Linux, advanced customization and affordability aren't really factors when you're playing video games on what is most likely a $1,000+ hardware setup. What is important to take into consideration is your control input. With Windows, you're faced with a plethora of choices when it comes to controls – and that's exactly what PC gamers like, right? Options. You can get a mechanical keyboard with Cherry MX Red switches or Cherry MX Brown; you can get your mouse wired or wireless, and you can even choose between a PS4 or Xbox One controller. With Linux, though, many of these options aren't natively supported. Sure, you can find a complicated workaround to use an Xbox One controller with your Ubuntu-equipped PC, or you could shell out a hundred bucks or so for Windows 10, where using an Xbox controller requires nothing more than plugging it into an open USB port. Of course, with Valve's proprietary Steam controller having arrived by the tail end of last year, you can be confident in Linux compatibility from a company trying to push its own platform. But, at the same time, the Steam controller is an impressively designed compromise – an attempt to shift controller users as well as mouse and keyboard fanatics toward a middle ground. Unfortunately, the result is a niche appeal, if any at all.
A catastrophe that's here to stay
Evidently, Windows isn't going anywhere, with a Steam market share of nearly 96% as of March 2016. In fact, Epic Games even recently called Microsoft out on trying to "monopolize" PC gaming with its Universal Windows Platform initiatives. Although Microsoft has lost sight of what PC gamers want in recent years (see: Games for Windows Live), there's no doubt that the current Xbox head Phil Spencer wants to bring the company back to its roots, namely by integrating features (and games) from the Xbox One into Windows 10. In contrast, Valve's attempts at making Linux not only the best place to play games from your Steam library, but actually the heart of your living room, are tough to square with reality. Despite the effort made with SteamOS, it doesn't help that a number of companies still haven't released their November 2015-bound Steam Machines after running up against the operating system's limitations. There's a clear winner here, and unless Linux rectifies its performance disparity, lack of natively supported control options and impoverished game library, the OS to beat for PC gaming will remain Windows 10. Source: http://www.techradar.com/news/
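The Windows-side setup mentioned at the end of the article above — Steam launching at system startup, straight into Big Picture Mode — can be scripted. Below is a minimal Python sketch, assuming a default Steam install path and the commonly cited "-tenfoot" Big Picture launch flag; both are assumptions to check against your own machine rather than details from the article.

```python
# Hedged sketch: make Steam start with Windows and open straight into Big Picture Mode
# by adding a per-user Run entry. The install path and the "-tenfoot" launch flag are
# assumptions (commonly cited defaults); adjust them for your own setup.
import winreg

STEAM_EXE = r"C:\Program Files (x86)\Steam\steam.exe"   # assumed default install location
RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def autostart_big_picture() -> None:
    # HKCU Run key: applies to the current user only, no admin rights needed.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "SteamBigPicture", 0, winreg.REG_SZ,
                          f'"{STEAM_EXE}" -tenfoot')

if __name__ == "__main__":
    autostart_big_picture()
```

Deleting the "SteamBigPicture" value from the same key undoes the change; Steam's own settings expose an equivalent run-at-startup option if you would rather not touch the registry.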
  6. SecureWorks is the technology industry's first initial public offering this year. That may be the extent of the victory lap for the tech I.P.O. market, at least for now. In its first day of trading on Friday, shares of SecureWorks, a digital security company, hovered near the $14 price it set the night before. The stock opened on the Nasdaq market at $13.89. SecureWorks raised $112 million, selling eight million shares. It had been marketing nine million shares within the range of $15.50 to $17.50, indicating that demand was weaker than expected. The lackluster demand is not that surprising. For one thing, SecureWorks has little in common with so-called unicorns — those private, venture-backed start-ups with valuations above $1 billion that have been avoiding the public markets. SecureWorks is 17 years old, based in Georgia and owned by Dell. And the ways in which SecureWorks does resemble some unicorns — top-line revenue growth, a history of losses and an enterprise-software business model — are not the most encouraging for investors. Recent trading among already public security stocks did not help SecureWorks' deal. Shares of FireEye and Rapid7 declined in recent weeks as SecureWorks was meeting with potential buyers of its stock. Investors look to companies similar to the one going public when trying to determine what price they might be willing to pay for the I.P.O. When the so-called comparables slip, it can be a bad sign for the debutant. For Dell, pricing below the range was not necessarily bad news. The computer maker, which has agreed to acquire EMC in the largest technology deal ever, is not selling shares in the offering. Dell will own 86 percent of SecureWorks after the offering, and it is hoping the share price will rise in the public market. Dell will also control more than 98 percent of the voting power through a separate class of shares. The I.P.O. price yields a valuation of $1.1 billion, which is almost double the roughly $600 million Dell paid for the company in 2011, according to Triton Research, which provides information on private companies. SecureWorks said it might use the proceeds from the offering to develop new solutions or enhance current ones, and to fund capital expenditures. Those funds may be necessary as competition in the security world increases. The company said in the filing that it expected "pricing pressures within the information security market to intensify as a result of action by our larger competitors to reduce the prices of their security monitoring, detection and prevention products and managed security services." Still, the company has drawn quite a bit of revenue from its 4,200 customers. SecureWorks reported $339.5 million in total revenue for the year through Jan. 29, a 30 percent increase from the same period last year. SecureWorks had $72.4 million in losses for the year, almost twice as much as in the same period in 2015. Bank of America, Morgan Stanley, Goldman Sachs and JPMorgan Chase are managing the offering. Source: http://www.nytimes.com/
  7. For the first time, scientists at IBM Research have demonstrated reliably storing 3 bits of data per cell using a relatively new memory technology known as phase-change memory (PCM). The current memory landscape spans from venerable DRAM to hard disk drives to ubiquitous flash. But in the last several years PCM has attracted the industry's attention as a potential universal memory technology based on its combination of read/write speed, endurance, non-volatility and density. For example, PCM doesn't lose data when powered off, unlike DRAM, and the technology can endure at least 10 million write cycles, compared to an average flash USB stick, which tops out at 3,000 write cycles. This research breakthrough provides fast and easy storage to capture the exponential growth of data from mobile devices and the Internet of Things.
Applications
IBM scientists envision standalone PCM as well as hybrid applications, which combine PCM and flash storage together, with PCM as an extremely fast cache. For example, a mobile phone's operating system could be stored in PCM, enabling the phone to launch in a few seconds. In the enterprise space, entire databases could be stored in PCM for blazing fast query processing for time-critical online applications, such as financial transactions. Machine learning algorithms using large datasets will also see a speed boost by reducing the latency overhead when reading the data between iterations.
How PCM Works
PCM materials exhibit two stable states, the amorphous (without a clearly defined structure) and crystalline (with structure) phases, of low and high electrical conductivity, respectively. To store a '0' or a '1', known as bits, on a PCM cell, a high or medium electrical current is applied to the material. A '0' can be programmed to be written in the amorphous phase or a '1' in the crystalline phase, or vice versa. Then to read the bit back, a low voltage is applied. This is how re-writable Blu-ray Discs store videos. Previously, scientists at IBM and other institutes have successfully demonstrated the ability to store 1 bit per cell in PCM, but today at the IEEE International Memory Workshop in Paris, IBM scientists are presenting, for the first time, successfully storing 3 bits per cell in a 64k-cell array at elevated temperatures and after 1 million endurance cycles. "Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry," said Dr. Haris Pozidis, an author of the paper and the manager of non-volatile memory research at IBM Research - Zurich. "Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash." To achieve multi-bit storage IBM scientists have developed two innovative enabling technologies: a set of drift-immune cell-state metrics and drift-tolerant coding and detection schemes. More specifically, the new cell-state metrics measure a physical property of the PCM cell that remains stable over time, and are thus insensitive to drift, which affects the stability of the cell's electrical conductivity with time. To provide additional robustness of the stored data in a cell over ambient temperature fluctuations, a novel coding and detection scheme is employed. This scheme adaptively modifies the level thresholds that are used to detect the cell's stored data so that they follow variations due to temperature change.
As a result, the cell state can be read reliably over long time periods after the memory is programmed, thus offering non-volatility. "Combined, these advancements address the key challenges of multi-bit PCM, including drift, variability, temperature sensitivity and endurance cycling," said Dr. Evangelos Eleftheriou, IBM Fellow. The experimental multi-bit PCM chip used by IBM scientists is connected to a standard integrated circuit board. The chip consists of a 2 × 2 Mcell array with a 4-bank interleaved architecture. The memory array size is 2 × 1000 μm × 800 μm. The PCM cells are based on doped-chalcogenide alloy and were integrated into the prototype chip serving as a characterization vehicle in 90 nm CMOS baseline technology. (A toy illustration of multi-level storage and adaptive read thresholds follows this post.) Source: http://phys.org/technology-news/
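To make the multi-bit idea above concrete: 3 bits per cell means each cell has to be programmed to, and read back from, one of 2^3 = 8 distinguishable levels, and drift makes fixed read thresholds unreliable over time. The toy Python model below only illustrates why adaptive thresholds help — it is not IBM's drift-immune metric or coding scheme, and every number in it is invented.

```python
# Toy illustration of 3-bit-per-cell storage with drift and adaptive read thresholds.
# Conceptual sketch only, not IBM's actual cell-state metric or detection scheme.
import numpy as np

rng = np.random.default_rng(0)
LEVELS = np.linspace(1.0, 8.0, 8)          # 8 nominal "conductance" levels -> 3 bits per cell

def program(bits):
    """Map 3-bit symbols (0..7) to nominal levels plus a little programming noise."""
    return LEVELS[bits] + rng.normal(0, 0.05, size=bits.shape)

def drift(readings, t):
    """Simple multiplicative drift: all levels sag as time passes."""
    return readings * (1.0 - 0.04 * np.log1p(t))

def read(readings, thresholds):
    """Decode each reading to a symbol using the given level thresholds."""
    return np.digitize(readings, thresholds)

data = rng.integers(0, 8, size=10_000)
cells = drift(program(data), t=1e6)

# Fixed thresholds (midpoints of the original levels) vs. thresholds re-estimated from
# a handful of reference cells that were programmed to known levels and drifted identically.
fixed_thr = (LEVELS[:-1] + LEVELS[1:]) / 2
ref = drift(program(np.arange(8)), t=1e6)
adaptive_thr = (ref[:-1] + ref[1:]) / 2

print("error rate, fixed thresholds:   ", np.mean(read(cells, fixed_thr) != data))
print("error rate, adaptive thresholds:", np.mean(read(cells, adaptive_thr) != data))
```

With the drifted reference cells re-centering the thresholds, decoding stays essentially error-free in this toy model, while the fixed thresholds misread most cells — the same qualitative problem the drift-tolerant detection scheme described above is meant to solve.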
  8. An operating system is only as useful as it is customizable. After all -- if you can't make an OS look and act the way you want it to, then who cares if it's objectively better (faster, more powerful) than any other OS? Good news, Windows 10 users: You can easily customize both the look and feel of Microsoft's new OS, and make it work for you. Here's our guide on how to make Windows 10 pretty and easy to use.
Windows 10 Settings menu: The Personalization tab: A look at Windows 10's new Personalization settings -- the bedrock of all your visual customization needs.
How to customize the Windows 10 Start menu: We were all excited about the return of the Start menu -- even though it's more like a hybrid of the Windows 8 Start screen and the Windows 7 Start menu rather than a traditional Start menu. Here's how to make it yours.
Pin links to the Start menu from any browser: Put your favorite links on the Start menu, no matter what web browser you favor.
10 ways to customize the taskbar in Windows 10: If you're like me, the taskbar -- not the Start menu -- is the real workhorse in Windows 10.
How to disable the Windows 10 lock screen: Windows 10 is designed for every device, including mobile devices, which is why it has a lock screen and a log-in screen. But for many of us desktop and laptop users, the lock screen is redundant and inconvenient. Here's how to get rid of it.
Make people jealous of your lock screen with Windows Spotlight: If you must have a lock screen, it might as well look awesome with pretty, high-res photos from the Windows Spotlight feature.
How to uninstall default apps in Windows 10: Windows 10's default apps don't take up a lot of space, but they do visually clutter up your Start menu (especially if you don't use them). Here's how to uninstall them (and how to reinstall them).
3 ways to customize Microsoft Edge: Microsoft's new Edge browser is a work-in-progress, but here's what you can do right now to make it look pretty.
How to get the home button back in Edge: Edge has decided to take a leaf out of Google Chrome's book and be too cool for a home button. But some of us like home buttons, so here's how to get it back.
Here's how to get rid of Internet Explorer: Edge is such a work in progress that Windows 10 also ships with Internet Explorer 11. You can't really uninstall IE11, but you can hide it so you don't have to look at it.
Force Cortana to use Google instead of Bing: Make Microsoft's virtual assistant use the web search engine of your choice.
Get rid of default cloud service icons in File Explorer: Cloud storage service icons show up in the left menu of your File Explorer, whether you want them to or not. But you can remove them with a relatively simple Registry hack (a hedged example follows this post).
How to change your computer's name in Windows 10: What's customization if you can't customize your PC's name?
Source: http://www.cnet.com/
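As an illustration of the kind of "relatively simple Registry hack" the cloud-icon item above refers to, here is a hedged Python sketch that toggles the System.IsPinnedToNameSpaceTree value commonly cited for OneDrive's File Explorer navigation-pane entry. The CLSID below is the widely quoted OneDrive one and should be treated as an assumption — verify it in regedit before running, and run from an elevated prompt.

```python
# Hedged sketch: show or hide a cloud-storage icon (here: OneDrive) in File Explorer's
# navigation pane on Windows 10. The CLSID and value name are the commonly cited ones
# for OneDrive -- treat them as assumptions and verify in regedit first. Needs admin rights.
import winreg

ONEDRIVE_NAVPANE_KEY = r"CLSID\{018D5C66-4533-4307-9B53-224DE2ED1FE6}"  # assumed OneDrive entry

def set_navpane_visibility(visible: bool) -> None:
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, ONEDRIVE_NAVPANE_KEY,
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "System.IsPinnedToNameSpaceTree", 0,
                          winreg.REG_DWORD, 1 if visible else 0)

if __name__ == "__main__":
    set_navpane_visibility(False)   # pass True to restore the icon
```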
  9. It looks interesting and I think it will definitely be entertaining for gamers.
  10. Microsoft has announced that Windows 10 has now been installed on 300 million active devices worldwide. Redmond's newest operating system hit the 200 million milestone on January 4, and if you go back a little further to October 6 last year, that was when the company announced Windows 10 had reached the 110 million mark. So from October to January, a period of three months, 90 million people adopted the OS, and from January to May, in other words four months, 100 million people have taken the plunge. A quick bit of rough napkin maths reveals that the average pace of adoption has thus slowed a little, from around 30 million people per month at the tail end of 2015, to 25 million people per month this year. However, there's every chance it will speed up again, as the end of July deadline for a free upgrade from Windows 7/8.1 looms. That will force the hand of those who have been sitting on the fence, or just plain haven't been bothered to deal with the hassle of making the move, as they need to click the upgrade button soon or potentially end up paying for the OS at a later date.
Time is running out
And indeed with its blog post announcement of the 300 million milestone, Microsoft took the opportunity to remind folks that "time is running out" on the freebie offer, and those who don't upgrade before or on July 29 will have to pay $119 (around £82) for Windows 10 Home. Whether Windows users need a reminder is just a tad debatable though, given the amount of badgering those on Windows 7/8.1 have been on the receiving end of since the launch of the newest version of Windows. Microsoft also shared a number of fun facts – well, facts, anyway – and stats gleaned from Windows 10 users (another ripe source of controversy). These include the fact that Cortana on Windows 10 has now answered over 6 billion queries, and over 63 billion minutes were spent surfing with Microsoft Edge in March, which represents a 50% growth in usage time since the last quarter. Windows 10 users are also playing more games than ever, Microsoft notes, with over 9 billion hours of gameplay having been racked up since the OS was launched (with DX12 support, of course). Source: http://www.techradar.com/news/
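The napkin maths in the post above, restated as a short sketch; the milestone figures come from the article, while the exact May announcement date is an assumption added here only to make the calculation run.

```python
# Rough Windows 10 adoption-rate arithmetic from the milestones quoted above.
from datetime import date

milestones = [
    (date(2015, 10, 6), 110),   # millions of active devices
    (date(2016, 1, 4), 200),
    (date(2016, 5, 5), 300),    # assumption: early-May announcement date
]

def rate_per_month(a, b):
    (d1, n1), (d2, n2) = a, b
    months = (d2 - d1).days / 30.4          # average month length
    return (n2 - n1) / months

print(f"Oct 2015 -> Jan 2016: ~{rate_per_month(milestones[0], milestones[1]):.0f} million devices/month")
print(f"Jan 2016 -> May 2016: ~{rate_per_month(milestones[1], milestones[2]):.0f} million devices/month")
```

Run as-is, this reproduces the article's roughly 30 million devices per month in late 2015 versus roughly 25 million per month in 2016.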
  11. A mediator between particles of light: an organic molecule mediates the interaction between a control and a probe beam, indicated by the magenta and green spheres in the foreground of the accompanying illustration; the energy of the two light beams changes when they leave the molecule, represented symbolically by the yellow and blue spheres. [Image caption from the original article.]
Researchers show that a single molecule allows a beam of light with a few photons to be controlled – a step towards the photonic computer. The Jedi knights of the Star Wars saga are engaged in an impossible fight. This does not result from the superiority of the enemy empire, but from physics, because laser swords cannot be used for fighting like metallic blades: beams of light don't feel each other. Until now, for a light beam to perceive another one, it has required a large chunk of material as intermediary, and very intense light. A team at the Max Planck Institute for the Science of Light has demonstrated for the first time a mediation process with only a single organic molecule and just a handful of photons. The researchers influence and switch another light beam with these particles of light. This basic experiment not only promises a place in physics textbooks, but it may also help in the development of nano-optical transistors for a photonic computer. Currently, the future of the computer industry is unclear. Semiconductor components like the transistor cannot be miniaturized indefinitely and run at ever-higher speeds. One possibility for developing more compact and powerful computers could result from processing information with photons instead of electrons. That is a major objective of photonics. However, there is a fundamental problem in the attempt to develop a purely optical transistor: "Light cannot simply be switched by other light in the way that electric current is switched with current in a conventional transistor", explains Vahid Sandoghdar, Director of the Nano-optics Division at the Max Planck Institute for the Science of Light and Alexander von Humboldt Professor of the Friedrich-Alexander University, Erlangen-Nürnberg. How shy particles of light are becomes obvious when one crosses the beams of two torches or two lasers. What happens is: nothing. "A medium is required to mediate the light-light interaction."
A control beam alters the optical properties of the molecule
Now Vahid Sandoghdar's team has succeeded in controlling light with a single organic molecule and just a handful of photons. To this end, the researchers first cooled molecules which they had embedded in a solid matrix to minus 272 degrees Celsius. With the help of modern microscopy and spectroscopy techniques, they made two carefully focused laser beams overlap on a single molecule: a so-called control beam and a probe beam, which is the one to be switched. "The control beam has the task of changing the optical properties of the molecule so that it becomes transparent for the second one, the probe beam", explains Andreas Maser, who performed the experiments as part of his doctoral thesis. Previously, powerful laser beams and macroscopic materials were needed to switch light with light, as this process relies on an interaction which physicists call non-linear. In such non-linear interactions the optical properties of a material also depend on the light intensity and not just on the intrinsic material. In addition, non-linear interactions are much weaker than the normal linear interaction.
This results from the reduced ability of the electrons in the molecule to follow the electric field vibrations of the light waves at different frequencies.
Just a single photon should be able to switch the molecule
Now, the Erlangen-based researchers were able to switch the probe beam with just a few light particles, as they conducted their experiment at a temperature close to absolute zero. "At very low temperatures the interaction cross-section of the molecule becomes a multiple of its geometrical size", explains Benjamin Gmeiner, who also played a key role in the experiments. So the molecule becomes something like an illusionary giant, with the result that almost every photon of the control beam can interact with the molecule. "Therefore, just a few photons from the laser beam are enough to alter the optical properties of the molecule." The researchers are even convinced that the control pulse can be weakened still further. "In principle, a single photon should be enough to alter the fate of a second photon", says Vahid Sandoghdar. The researchers will now continue to work on controlling a light signal with individual photons. Simultaneously, the team in Erlangen is focusing on the practical side of things: the researchers would like to embed the molecule as a nano-optical transistor in a photonic wave-guide structure that should serve to wire up many molecules, as is common in electrical circuitry. This would be an important step towards the future perspective of processing information in a photonic computer. Source: http://scitechdaily.com/
  12. Pricing in ads for broadband Internet access is too often misleading and needs tighter regulation. That's the verdict of the U.K.'s Advertising Standards Authority, which on Wednesday gave ISPs six months to clean up their act before it introduces new rules on how they can promote their services. The monthly cost of broadband Internet access bundled with fixed-line telephone service ought to be simple enough to determine. However, after viewing a typical ad, only 23 percent of people could correctly identify the cost in a study by the ASA and the U.K.'s communications regulator, Ofcom. By presenting the cost of broadband service and line rental separately, and giving undue prominence to limited-time introductory offers, contract length and one-off costs, ISPs are able to disguise the true cost of their service. It's not just a problem in the U.K., either: ISPs in France too display introductory prices that are only valid for six or twelve months, and are often only a third or less of the price once the offer expires. They bury the duration of the offer and the full price in much smaller type, or even hide it altogether in online presentations, requiring suspicious customers to click or mouse over to reveal it. The ASA-Ofcom study found that 22 percent of participants still couldn't work out the total cost of service per month even after being prompted to review the ad, and 81 percent were unable to calculate the lifetime cost of a broadband contract. From Oct. 31, the ASA will require advertisers to show all-inclusive up-front and monthly costs, without separating out line rental. It also wants them to give greater prominence to the minimum contract length, the cost after initial discounts expire, and any up-front costs that might eat into the headline discounts. Unfortunately for broadband customers, advertisers risk little if they flout the ASA's rules. The strongest penalty it can impose is to require that the offending ad be withdrawn, making enforcement a giant game of Whac-a-Mole as advertisers can push out a different creative and start again. Source: http://www.pcworld.com/news/?start=15
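The sum most participants in the ASA-Ofcom study could not do — the real lifetime and average monthly cost of a deal with an introductory price, separate line rental and up-front fees — looks roughly like this. The figures in the example are invented for illustration, not taken from the study.

```python
# Illustrative contract-cost calculator; the example numbers are made up, not from the ASA study.
def broadband_cost(intro_price, intro_months, standard_price, line_rental,
                   contract_months, upfront_fees=0.0):
    """Return (lifetime cost, true average monthly cost) over the minimum contract term."""
    broadband = (intro_price * intro_months
                 + standard_price * (contract_months - intro_months))
    total = broadband + line_rental * contract_months + upfront_fees
    return total, total / contract_months

total, per_month = broadband_cost(intro_price=5.00, intro_months=6,
                                  standard_price=25.00, line_rental=18.00,
                                  contract_months=18, upfront_fees=30.00)
print(f"lifetime cost over the contract: £{total:.2f}")
print(f"true average per month:          £{per_month:.2f}")
```

The point of the example is the gap the regulator is targeting: the headline "£5 a month" deal above actually averages £38 a month over the contract once line rental, the post-offer price and the one-off fees are included.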
  13. In March, Opera added native ad blocking to a developer edition of its browser. Now, the company has pushed that feature into general release, dramatically decreasing the load time of web pages at the expense of the advertising revenue that would normally be driven to the site. Native ad blocking is available both in the browser for Windows PCs and in the Opera Mini browser for Android. Blocking ads not only speeds up the overall browsing experience, according to Opera, but can also eliminate a significant chunk of data that must be downloaded by smartphone users to view a web page. An Opera spokesman said users could expect roughly the same performance from the stable version as they'd experienced in the developer build, with pages loading up to 90 percent faster than with ads enabled. Opera also went a step further, claiming that building native blocking into Opera made the browser about 45 percent faster than the stable version of Google's Chrome browser with AdBlock Plus (a third-party ad blocker) installed. "Our goal is to provide the fastest and the smoothest online experience for our users," Krystian Kolondra, the senior vice president in charge of engineering for Opera, said previously. "While working on that, we have discovered that a lot more time is spent on handling ads and trackers than we thought earlier." Why this matters: In many ways, ad-blocking browsers like Opera represent the Napster of online journalism: a convenient, efficient way to load web pages unencumbered by the scripts, tracking pixels, and banner ads that can result in a bumpy experience. With the demise of print subscriptions, however, publishers will inevitably turn to other means of raising revenue—including preventing a web page from being loaded if a user has an ad blocker enabled. It ain't over yet.
How to turn off ads in Opera
In Opera Mini for Android, ads can be turned off by tapping the O menu, then toggling ad blocking on or off. Opera said Opera Mini will allow ads to be blocked in both the high- and extreme-saving modes. Likewise, ads can be turned off in the desktop version of Opera either from the Settings menu, or else from a popup that should appear when the first page is loaded. "Whitelisting" a site can be performed by clicking the shield icon, which turns ad blocking on or off. As in the developer build, you'll be able to see how many ads you've blocked per page, and even load the page with ads turned on and off in a side-by-side comparison. The new build also adds two unrelated features that are worth checking out. The desktop version of Opera includes a video pop-out feature, which shunts a playing video to the side of your screen. Opera Mini also adds a feature to add web pages to your home screen, Opera said. In last year's tests, Opera already delivered the fastest Web browsing experience available, roughly equivalent to Google's Chrome. Now, with ad blocking turned on, Opera could very well surge to the front of the pack. Source: http://www.pcworld.com/news
  14. Computer scientists at the University of Adelaide have developed a sophisticated but easy-to-use online tool to help build people's trust in the cloud. Cloud computing is widely recognised as a highly useful technology, with multiple benefits such as huge data storage capabilities, computational power, lower costs for companies and individuals, simplicity of use, and flexibility of application. But the potential growth in the uptake of the cloud is being hampered by a major issue: people simply don't trust it. "Trust management is a top obstacle in cloud computing, and it's a challenging area of research," says the University's Professor Michael Sheng, ARC Future Fellow in the School of Computer Science. "There are many reasons why people lack faith in the cloud – there's little to no transparency, often you don't know who provides the service, and it's difficult at times for users to know whether certain cloud-based applications or sites are malicious or genuine," he says. For the past few years, Professor Sheng and his students have been developing a system known as Cloud Armor. Cloud Armor is aimed at showing which cloud sites, applications or providers are more trustworthy than others, offering a score out of 100. Professor Sheng says: "The basic concept behind this is like the website Rotten Tomatoes, which is widely used by people to review and rank films. But what happens when people are not being entirely honest in their views? "How do we cut through comments that are designed as a malicious and systematic attack against a product, and also those that are well-executed self-promotion? To be able to give consumers an accurate understanding of trustworthiness, we need to be able to sort through this false feedback." To do that, Cloud Armor relies on a "credibility model". An in-house-designed crawler engine scans all of the comments made on the internet about any aspect of the cloud, and the credibility model works out what feedback is credible and what isn't – such as certain statements that are repeated over and over, indicating potential false feedback. "We've tested this with and without our credibility model – without the model, some cloud applications receive a maximum score of 100; but with the model, that score might only get to 50 or 60," Professor Sheng says. "We're very proud of the work we've done on Cloud Armor. We've presented it at a number of top-tier conferences and several prestigious journals and already it's attracting a lot of attention from the international community. "I hope that through the use of a tool like this, it will help to create a culture of transparency in the cloud, and ultimately become more trustworthy to users," he says. Source: http://phys.org/technology-news/computer-sciences/
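The article doesn't publish Cloud Armor's credibility model, but the underlying idea — discount repeated, likely coordinated feedback before averaging a trust score — can be sketched as a toy heuristic like the one below. This is purely illustrative and much cruder than the University of Adelaide system it gestures at.

```python
# Toy trust score: down-weight near-duplicate feedback before averaging ratings.
# A rough illustration of the idea only; Cloud Armor's real credibility model is more sophisticated.
from collections import Counter

def trust_score(feedback):
    """feedback: list of (comment_text, rating 0-100). Returns a 0-100 score."""
    normalized = [(" ".join(text.lower().split()), rating) for text, rating in feedback]
    counts = Counter(text for text, _ in normalized)
    weighted_sum = weights = 0.0
    for text, rating in normalized:
        w = 1.0 / counts[text]          # repeated identical comments share a single vote
        weighted_sum += w * rating
        weights += w
    return weighted_sum / weights if weights else 0.0

reviews = [("Great uptime and support", 90),
           ("Best cloud ever!!!", 100), ("Best cloud ever!!!", 100),
           ("Best cloud ever!!!", 100), ("Best cloud ever!!!", 100),
           ("Slow and opaque billing", 40)]
print(round(trust_score(reviews)))   # the repeated praise counts once, pulling the score down
```

Here the naive average of the six ratings would be 88, while the duplicate-discounted score comes out around 77 — the same direction of correction the article describes, where a provider's score dropped from 100 to 50 or 60 once false feedback was filtered.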
  15. Virtual reality, long the stuff of sci-fi movies and expensive, disappointing gaming systems, appears poised for a breakout. Facebook CEO Mark Zuckerberg spent $2 billion in 2014 to acquire Oculus VR and its Rift virtual-reality headsets. Google now sells a boxy cardboard viewer that lets users turn their smartphone screens into virtual-reality wonderlands for a mere $15. And YouTube just introduced live, 360-degree streaming video. There's a big barrier to the widespread use of this technology, though: Virtual reality often makes people sick. Virtual-reality sickness isn't a new problem. It's been known as long as test pilots, test drivers and potential astronauts have been practicing their skills in mock vehicles, though it was called simulator sickness in those cases. Not unlike motion sickness or seasickness, VR sickness has its roots in the mismatch between the visual and vestibular systems, said Jorge Serrador, a professor of pharmacology, physiology and neuroscience at Rutgers New Jersey Medical School.
How VR sickness works
Imagine standing below decks in a boat on choppy seas. The entire cabin is moving, so your eyes tell you you're standing still. But you feel the movement — up, down, pitching side to side. You start to feel clammy. Your head aches. You go pale and reach for a trash basket to retch into. The problem starts in the vestibular system, a series of fluid-filled canals and chambers in the inner ear. This system includes three semicircular canals, all lined with hair cells, so named for their hair-like projections into the liquid-filled channels. As the head moves, so too does the fluid in the canals, which in turn stimulates the hair cells. Because each canal is situated differently, each sends information on a different type of motion to the brain: up/down, side to side and degree of tilt. Connected to the semicircular canals is the utricle, a sac containing fluid and tiny calcium carbonate particles called otoliths. When the head moves, so too do the otoliths, sending the brain signals about horizontal movement. Next door, a chamber called the saccule uses a similar setup to detect vertical acceleration. This system typically works in tandem with the visual system and with the proprioceptive system, integrating sight and sensations from the muscles and joints to tell the brain where the body is in space. A virtual-reality environment hammers a wedge between these systems.
Simulator sickness
Unlike seasickness or car sickness, virtual-reality sickness doesn't require motion at all. It was first reported in 1957 in a helicopter-training simulator, according to a 1995 U.S. Army Research Institute report on the topic. One 1989 study found that as many as 40 percent of military pilots experienced some sickness during simulator training — an alarming number, according to the Army report, because military pilots are probably less likely than the general population to have problems with "motion" sickness. Because of simulator sickness, early simulator developers started to add motion to their models, creating plane simulators that actually pitched, rolled and moved up and down a bit. But sickness still occurs, according to the Army report, because the computer visualization and the simulator motion might not line up completely. Small lags between simulator visuals and motion remain a problem today, Serrador said. "You go into a simulator and [the movements] don't match exactly the same as they do in the real world," he said.
"And all the sudden, what you'll find is you just don't feel right." Typically, the bigger the mismatch, the worse the sickness. In one 2003 study published in the journal Neuroscience Letters, Japanese researchers put people in a virtual-reality simulator and had them turn and move their heads. In some conditions, the VR screen would turn and twist twice as much as the person's actual head movement. Unsurprisingly, the people in those conditions reported feeling a lot sicker than those in conditions where the movement and the visual cues matched up. Combating the nauseating effects of VR No one really knows why vestibular and visual mismatches lead to feelings of nausea. One theory dating back to 1977 suggests that the body mistakes the confusion over the conflicting signals as a sign that it's ingested something toxic (since toxins can cause neurological confusion). To be on the safe side, it throws up. But there's little direct evidence for this theory. People have different levels of susceptibility to virtual-reality sickness, and they can also adapt to situations that initially turn them green around the gills. The Navy, for example, uses a swivel chair called the Barany chair to desensitize pilots to motion sickness. Over time, the brain figures out which cues to pay attention to and which to ignore, Serrador said. At some point, even the act of putting on a virtual reality headset will trigger the brain to go into a sort of virtual-reality mode, he said. "There's lots and lots of data that show that your brain will use the context cues around it to prepare itself," Serrador said. Virtual-reality developers are working to combat the nauseating side effects of their products. Oculus Rift, for example, boasts a souped-up refresh rate that helps prevent visual lags as the user navigates the virtual world. And Purdue University researchers invented a surprisingly simple fix: They stuck a cartoon nose (which they call the "nasum virtualis") in the visual display of a virtual-reality game. Their results, presented in March 2015 at the Game Developers Conference in San Francisco, showed that this fixed point helped people cope with virtual-reality sickness. In a slow-paced game in which players explored a Tuscan villa, the nose enabled users to go 94.2 seconds longer, on average, without feeling sick. People lasted 2 seconds longer in an almost intolerably nauseating roller-coaster game. The nose seems to give the brain a reference point to hang on to, said study researcher David Whittinghill, a professor of computer graphics technology at Purdue. "Our suspicion is that you have this stable object that your body is accustomed to tuning out, but it's still there and your sensory system knows it," Whittinghill said in a statement.
  16. For people with gluten allergies or celiac disease, the idea of eating out in restaurants can be terrifying. It typically involves scrutinizing menus and food labels, interrogating waiters, or having to bring their own meals wherever they go. But now, a discreet new device, small enough to fit into a pocket or purse, could make eating out an easier and safer experience for gluten-sensitive people. Manufactured by San Francisco-based startup 6SensorLabs, the portable gluten-testing device, called Nima, can test food for the presence of gluten, providing results within minutes and reducing people's food anxiety. The device could also provide greater social freedom, making meals more enjoyable, said 6SensorLabs co-founder and chief technology officer Scott Sundvor. "A lot of people who have food issues get very stressed when they're eating out, and they avoid eating out altogether," Sundvor told Live Science. "Our product will really enable them to start going out again and start being more open in social settings." An estimated one in 133 Americans, or about 1 percent of the population, is affected by celiac disease, an inherited autoimmune disease in which eating gluten can cause severe damage to the small intestine, according to the organization Beyond Celiac. There are currently no treatments or cures for celiac disease — except eating a diet without any gluten, which is a protein found in wheat, rye and barley. Using the Nima device, individuals can make sure their food is gluten-free by placing a tiny piece of their meal inside a disposable capsule, twisting the cap shut and inserting the capsule into the Nima's main sensor unit. Within 2 to 3 minutes, Nima will let users know if the food is safe to eat by displaying a smiley face on the screen if there is no gluten, or a frown if the result is positive for the protein, the company said. The device can test a range of foods, from soups and sauces to more solid items like baked and fried goods, Sundvor said. Using a combination of a chemical and mechanical process, the Nima grinds down any chunky bits, dissolving the food in a proprietary blend of enzymes and antibodies that zero in on any gluten in the mix. And Sundvor said those antibodies can detect levels of gluten as low as 20 parts per million, the FDA limit for the maximum level of gluten considered acceptable in foods that are labeled gluten-free. But the Nima itself is not an FDA-approved device. It is not intended for medical or diagnostic use, the company said. Instead, the Nima is marketed as a tool for getting more information about food when eating out, Sundvor said. "We're selling this as a device that can give another layer of data," Sundvor told Live Science. "This isn’t something that will help people treat their disease or diagnose gluten-sensitivity, and that's why we don't need FDA approval for the device." The Nima offers a portable alternative to current clunky, time-consuming food-testing kits on the market, Sundvor said. The device is 99.5 percent accurate, he said. That number is based on about 2,000 tests comparing the Nima's sensitivity to gluten in various foods to that of other consumer gluten tests currently on the market. Nima's results have also been validated by two different external labs: Bia Diagnostics and BioAssay Systems. And Sundvor said his company is making sure to get the device tested even more thoroughly by a third party before making the sensor available to the public later this year. There are still some challenges, though. 
Most importantly, the Nima can't guarantee that an entire meal will be free of gluten, because it tests only the portion of the meal that users place in the device, Sundvor said. If there is gluten in the salad dressing on the side of a meal, for example, and not in the Parmesan-crusted chicken, the device could give a false negative if the chicken is the only part of the meal tested. The Nima avoids cross-contamination inside the device itself by using disposable capsules. This design also allows for potential expansion into capsules for other allergies later on, with the development of dairy and peanut allergy-testing capsules already underway, Sundvor said. Currently, users can pre-order a starter kit online, which consists of the main Nima sensor unit and three capsules, selling for $199. Refill packs of 12 capsules each will also be available on a subscription basis for $47.95 during the pre-sale. Once the device is available, in mid-2016, the company will also have a Nima app, in which users can log results and share their experiences at different restaurants, testing different foods, Sundvor said. "This is going to have a really big impact on people," he added. "It will bring more transparency to food in general and help people with their dietary issues." Source: http://www.livescience.com/technology/
  17. NEWS ANALYSIS: Augmented hearing gives you total control over the sounds of your environment. And it's coming to an ear canal near you. We live in an attention economy. Every Website, game, video, TV show, meme and social media post demands your attention. But success in this world is based in large part on your ability to direct your attention to productive tasks. Author Cal Newport calls "Deep Work" the secret to achieving great things. Researchers at the University of Illinois at Urbana-Champaign found that ambient background noise—the kind you'll encounter at a local coffee shop—measurably boosts creativity and productivity. And this idea definitely resonates with me. As a writer, I concentrate for a living. But as I'm crafting words in my head, I find the cognitive load vastly higher if two people in the room are having a conversation or if someone is talking on the phone. I also get distracted by annoying sounds outside—for example, a neighbor's dog spends much of the day barking. The solution for me has always been to play some kind of white noise or ambient music—some sound that's constant and pleasant and puts annoying sounds in the background. Or I go to work in a coffee shop, where I always work better than I do in an office, even a quiet one. The same goes for sleeping. I can sleep well when it's quiet. But as a digital nomad, I sometimes live in cities where the sounds of car horns and sirens and yelling can keep me up at night. Sometimes I live in the country. I recently lived in Cuba, for example, and in a couple of AirBnB apartments outside of Havana, it felt like the roosters were going out of their way to prevent me from sleeping past 4 a.m. And again, white noise saved the day—or the dawn, in this case—which I often play from an iPad app, and it helps me sleep when it's noisy. But the whole white noise industry, from white-noise machines to white-noise apps, is about to get disrupted by something far better: let's call it augmented hearing.
Of Course, There's an App for That
The past few years have seen the emergence of a smattering of cloud-based white noise sites that simulate coffee shops. Sites such as Coffitivity and Hipstersound let you not only choose the ambient sounds of a coffee shop, but also specific locations such as Brazil or Paris. A site called Coding gives you the sounds of a room full of developers writing code. There are many others. These sites are nice. But they're basically just recorded and looped sounds. The new world of augmented hearing will replace canned sounds with the digital processing of actual sounds. A free new app called Hear - advanced listening for iOS launched this week from Reality Jockey Ltd. The app offers augmented hearing in seven customizable varieties. You use it with your existing headphones or earbuds. I use it with my noise-canceling Bose headphones for maximum effect. Most of the Hear modes are gimmicky parlor tricks that accidentally simulate horror movie soundtracks or even drug experiences, according to tech blogs quoted on the Hear page. For example, the trippiest, most psychedelic mode is called "Office." It transforms every actual sound into a mesmerizing nightmarish drone sound and offsets it in time. A mode called "Happy" is by far the most bizarre. It takes ambient sounds and repeats them in echoes of higher and lower pitch. The "Talk" mode partially auto-tunes human speech. So if you're talking to somebody, their sentences become harmonized into song.
But other modes are actually useful and interesting—especially what they promise for the future of "augmented hearing." One of my favorites is called "Auto Volume," which silences ambient noise but turns up and clarifies human speech. It's great for working around the house when you want to concentrate, but also want to be available to interact with family members. The "Sleep" and "Relax" modes give you good old-fashioned white noise, but also integrate actual sounds into the track. This is a powerful trick. Canned white noise generators ignore or drown out actual noises in the environment. The "Sleep," "Relax," and a few other modes capture and integrate some ambient sounds, turning them into part of the white noise. The effect is a more fluid and, for lack of a better term, "believable" white noise. "Super hearing" simply cranks up the volume, enabling you to hear that fly on the other side of the room as if it were right next to your ear. Note that it's possible for the "Super hearing" mode to be abused. One could easily imagine the strategic placement of a smartphone within microphone range of a private conversation while running this app in "Super hearing" mode, with a snoop listening via a Bluetooth headset that has no microphone. Add this to the privacy risks associated with smartphones. Each of the modes has several sliders for precisely customizing sounds. The sliders have strange names and do unexpected things. For example, the customizable sliders for the "Office" mode are "Detach," "Time Scramble," "Unhumanize" and "Volume." You can't predict the effect without trial and error. The Hear app is a tiny glimpse into a future where we'll be able to pick and choose as well as process noises in our environment to customize exactly what we want to hear. It also presages the use of processed sound to simulate drug experiences, relieve boredom or enhance mood.
The Future of Noise
The way all augmented hearing works is that microphones capture actual sounds in the environment. Then a computer chip processes those sounds before sending audio to the human ear. Software is able to tease out, identify, separate and individually process different kinds of sound. (A toy example of this kind of per-band processing follows this post.) This can be done with an app, such as the Hear app. But it also might happen in customized hardware. A new generation of earbuds connect to your smartphone via Bluetooth. You then use a complementary app to control what you hear and what you don't. The earbuds will give you hands-free calls and music. But they also process and enable the customization of the noise in the environment. Early products in this space have names including Nuheara IQbuds, Doppler Here and Bragi Dash. We've all been in a crowded, noisy room and tried to have a conversation. Wouldn't it be great to silence the din of chatter and music and boost the sound of the other person's talking? Or, conversely, let's say you're listening to a great musician on stage, but the people around you are chattering away. Wouldn't it be great to silence those people and amplify the music? One product in this space, the Doppler Here, actually has a "reduce baby" feature that filters out the sound of a baby crying so you can't hear it. That would be pretty handy during those overseas and red-eye flights, when you need to sleep. Science has demonstrated that the right kind of sound can enhance creativity and productivity. Intuition tells us that blocking certain sounds can enhance mood by filtering out annoying sounds.
I wonder what other mental benefits can be produced with the right kind of noise processing? Undoubtedly, everyone will want the ability to exert control over the sounds one hears. This revolution in selective hearing is coming to us in multiple formats. It will be built into phones, earbuds, headphones and more. And when the custom tailoring of sound is a normal consumer electronics feature—when sounds can be boosted, enhanced for speech and more—hearing aids will become obsolete. Duplicating the functionality of hearing aids will be simply one of the options in one's augmented hearing app of choice. I would even go so far as to predict that, like people who wear hearing aids, most consumers will get in the habit of wearing augmented hearing hardware in their ears during all their waking hours. As is often the case, the biggest constraint on this technology is battery power, which is never enough. Despite that limitation, I think we'll see over the next five years the total mainstreaming of augmented hearing. Technology will let us hear whatever we want to hear, and filter out the rest. Source: http://www.eweek.com/mobile/were-entering-the-era-of-augmented-hearing-and-white-noise.html
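As a rough illustration of the "Auto Volume" idea described in the post above — keep the speech band, attenuate everything else — here is a toy per-band gain pass in Python. Real products do this with low-latency adaptive DSP rather than a one-shot FFT, so treat this purely as a conceptual sketch; the band edges and gains are arbitrary choices.

```python
# Toy "augmented hearing" pass: boost the speech band, attenuate everything else.
# Conceptual sketch only; real earbuds use low-latency adaptive DSP, not a one-shot FFT.
import numpy as np

def auto_volume(frame, sample_rate, speech_band=(300.0, 3400.0),
                speech_gain=2.0, ambient_gain=0.2):
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    in_speech = (freqs >= speech_band[0]) & (freqs <= speech_band[1])
    gains = np.where(in_speech, speech_gain, ambient_gain)
    return np.fft.irfft(spectrum * gains, n=len(frame))

def band_energy(signal, sample_rate, lo, hi):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return spectrum[(freqs >= lo) & (freqs <= hi)].sum()

# Example: a synthetic mix of a "voice" tone (1 kHz) and low-frequency rumble (60 Hz).
sr = 16_000
t = np.arange(sr) / sr
mix = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
out = auto_volume(mix, sr)

print("60 Hz rumble energy before/after: %.0f / %.0f" %
      (band_energy(mix, sr, 20, 100), band_energy(out, sr, 20, 100)))
print("1 kHz speech energy before/after: %.0f / %.0f" %
      (band_energy(mix, sr, 300, 3400), band_energy(out, sr, 300, 3400)))
```

The same structure — estimate what kind of sound sits in each band, then apply a different gain to each — is, at a very high level, what the "reduce baby" or speech-boost features described above have to do, just continuously and with far better classification.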
  18. Microsoft and its partners announce new Windows 10-powered IoT devices at the Hannover Messe conference in Germany. Manufacturing and the Internet of things (IoT) go hand in hand. At the Hannover Messe industrial technology conference in Germany this week, Microsoft and select partner companies are demonstrating how Windows 10 can help enable intelligent, IoT-enabled business processes for factories, equipment makers and suppliers. Complementing the company's new cloud-based Azure IoT device management capabilities and the Azure IoT Gateway SDK (software development kit), also announced this week, Microsoft enlisted some technology partners, including Dell, to introduce Windows-based devices for connected enterprises. Dell's contribution is the new Dell Edge Gateway 5100 running Windows 10 IoT Enterprise. Billed as Dell's "most industrial IoT device," the hardware provides built-in data capture and edge analytics capabilities in an enclosure that can take the punishment doled out by factory floors and other demanding environments. "It is a rugged device built for industrial environments including support for extended temperature ranges," blogged Craig Dewar, senior director of Microsoft Windows Commercial marketing. "Dell also launched five new accessories for the Edge Gateways, including I/O and power modules, ZigBee module, CAN bus card, and IP65 rugged enclosure." Powered by a dual-core Intel Atom processor, the 5100 features a fanless, solid-state design that can withstand temperatures as high as 70°C (158°F) or as low as -30°C (-22°F). An optional enclosure provides added protection against oil, dust and other contaminants. Next year, Liebherr Group plans to launch a pharmaceutical refrigerator that will use Windows IoT and Azure Stream Analytics, Microsoft's cloud-based real-time IoT data analytics platform, to help hospitals store drugs and medical supplies at the proper temperature. Windows 10 IoT will allow Liebherr Group to support, secure and update the cooling unit, Dewar said. During Hannover Messe, Microsoft also announced support for the OPC Unified Architecture (UA) open-source software stack used by industrial manufacturing equipment makers. This will make it possible for OPC UA devices to send telemetry data to Azure and allow Microsoft's cloud customers to control their equipment remotely. (A hedged sketch of that kind of device-to-cloud telemetry follows this post.) "OPC UA is the single, neutral, widely accepted standard to embrace the complex world of automation devices to easily and securely connect them everywhere," commented Stefan Hoppe, vice president of the OPC Foundation, in a statement. "With the adoption by Microsoft to its Windows 10 operating system and Azure cloud, the OPC UA standard passes the critical milestone of general acceptance by the broader IT world." Meanwhile, one of Microsoft's first-ever laptops, the Windows 10-powered Surface Book, has won the approval of the industrial design software specialists at Siemens. Featuring a detachable keyboard and discrete graphics processing capabilities in higher-spec models, the Surface Book, which was first introduced in October 2015, has been certified for the Solid Edge computer-aided design (CAD) software by Siemens. Rivaling Apple's MacBook Pro, the Surface Book straddles the premium laptop and 2-in-1 device markets. Like the company's successful line of Surface Pro tablets, the Surface Book supports both touch and stylus-based input.
According to Siemens, Solid Edge's user base is flocking to the hybrid laptop, making it the software's fastest-growing mobile platform in terms of adoption. Source: http://www.eweek.com/
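The telemetry path described in the post above — a gateway or OPC UA-connected device pushing readings into Azure for remote monitoring — looks roughly like the following sketch. It is written against the current azure-iot-device Python package rather than the 2016-era SDK the article would have implied, and the connection string, device name and field names are placeholders, not values from the article.

```python
# Hedged sketch of device-to-cloud telemetry to Azure IoT Hub, e.g. temperature readings
# from a pharmaceutical refrigerator or a factory-floor gateway. Uses the current
# azure-iot-device package (pip install azure-iot-device); the connection string is a placeholder.
import json
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

def send_readings(client, readings):
    for reading in readings:
        msg = Message(json.dumps(reading))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)        # one device-to-cloud message per reading
        time.sleep(1)                   # pacing only; not required by the service

if __name__ == "__main__":
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    client.connect()
    send_readings(client, [
        {"sensor": "fridge-01", "temperature_c": 4.2},
        {"sensor": "fridge-01", "temperature_c": 4.4},
    ])
    client.disconnect()
```

On the cloud side, a service such as Azure Stream Analytics (mentioned in the article) would then query this stream, for example to alert when a refrigerator drifts out of its allowed temperature range.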
  19. It's a small evolutionary step from spraying toner on paper to putting down layers of something more substantial until the layers add up to an object. Amazing! And yet, by enabling machines to produce objects of any shape, on the spot and as needed, 3D printing really is ushering in a new era.
  20. It's no secret that the US government invests a lot of money in research and development efforts around more and more powerful computing systems. Some of that money goes to researchers who spend time pushing the boundaries of energy efficiency of computers and data centers. The latest example of this investment is a grant to an assistant professor at the Rochester Institute of Technology who believes it's possible to achieve significant energy efficiency improvements in data centers by eliminating physical interconnects both within and between servers. Amlan Ganguly, a faculty member at RIT's Kate Gleason College of Engineering, has been publishing research papers on wireless and photonic communication mechanisms within circuits for several years now. His next project is to scale that approach beyond the chip, to enable wireless interconnection between components of a server and between servers in a data center. The nearly $600,000 grant from the National Science Foundation will fund those efforts over the next five years. "We want to revolutionize that mechanism of communication within servers with wireless interconnects," Ganguly said in a statement. "The crux of the approach is to replace the legacy internet type of connections with the novel wireless technology which we project to be significantly more power efficient than the current state of the art." He described the project as "high-risk," citing significant challenges with interconnecting what could be tens to hundreds of servers on the same wireless frequency. There's a lot of crosstalk, or interference, which makes it challenging to create an effective way to manage that communication. This is not the first time the NSF has funded a research project Ganguly has been involved in. At least one project where he was the lead investigator and three others where he participated in a non-principal role have received grants from the foundation over the last seven years. Most of them were projects that researched wireless on-chip communications. Source: http://www.thewhir.com/web-hosting-news/wireless-interconnects-promise-big-data-center-efficiency-wins?utm_source=internal-link&utm_medium=foot-link&utm_campaign=previous
  21. One Intel forecast about the future of computing and data centers helps put the company's restructuring announcement this week in perspective. Between 70 and 80 percent of systems going into data centers ten years from now will be what the processor giant calls "scale computing" systems. "We see that the world is moving to scale computing in data centers," Jason Waxman, an Intel VP and general manager of the company's Cloud Platforms Group, said in an interview. "Our projection is between 70 and 80 percent of the compute, network, and storage will be going into what we call scale data centers by 2025." Scale data centers are data centers designed the same way web giants like Google, Microsoft, and Facebook design their facilities and IT systems today. Intel isn't saying most data centers will be the size of Google or Facebook data centers, but it is saying that most of them will be designed using the same principles, to deliver computing at scale. Things like the three major forms of cloud computing (IT infrastructure, platform, or software delivered as subscription services), connected cars, personalized healthcare, and so on, all require large scale. "If you're doing a connected-car type of solution, that's not a small-scale type of deployment," Waxman said. "If you're doing healthcare and you're trying to do personalized medicine, that's a large-scale deployment." These solutions, which require an approach to infrastructure that's different from what most companies are used to, are on the rise, and a substantial portion of the world's IT capacity already sits in scale data centers. "Right now, about 40 percent is already there, so you're talking about a continued move toward deploying technology at scale," Waxman said.
Intel Restructures to Focus on Cloud, IoT
This shift will affect virtually every industry, and it is a big opportunity for Intel, which is looking at the data center market as its best bet going forward, faced with slowly but steadily dwindling revenue from PC parts and a weak position in the mobile chip market. The company's execs were upfront about this on this week's first-quarter earnings call with analysts, when they announced the restructuring plan. Intel is shifting from being primarily a PC company to a company that powers the cloud and connected devices, the so-called Internet of Things, CEO Brian Krzanich said on the call. Already, "40 percent of our revenue and 60 percent of our margin comes from areas other than PC," so it's time to push the company all the way toward pursuing a non-PC-focused strategy, he said. The restructuring plan, which includes letting go of 12,000 people, or about 11 percent of Intel's workforce, is meant to free up resources to invest in data center, IoT, and memory segments, the three fastest-growing businesses within Intel, Krzanich said. The company expects to free up $750 million this year and $1.4 billion every 12 months by mid-2017 as a result.
FPGAs Expected to Accelerate More than Servers
The strength of Intel's position in the cloud infrastructure market is undeniable. Its chips power virtually every cloud server in the world, and it has collaborated with the big scale data center operators on chip and server design for years, customizing solutions for their needs, so it has a lot of unique insight into the needs of scale infrastructure. 
One of the key technologies Intel expects to accelerate growth in its data center business is the Field Programmable Gate Array (FPGA), which became a focus for the company during its collaboration with big cloud providers, namely Microsoft, which about two years ago started looking into combining CPUs with FPGAs. These are programmable chips that let companies reconfigure servers quickly to optimize them for different types of workloads and to offload some processing work from CPUs, a method known as workload acceleration, used widely in supercomputer design. Diane Bryant, general manager of Intel's Data Center Group, talked about a hybrid chip that would combine Xeon E5, its flagship server CPU, with an FPGA in 2014. The company doubled down on FPGA investment last year with a $16.7 billion acquisition of FPGA specialist Altera. Just recently, it started shipping the first samples of single-socket Xeon/FPGA packages to customers, Krzanich said this week. Intel developed the product together with multiple large cloud providers, and "Microsoft was a definitional customer for them," according to Waxman.
Beyond the Server Chassis
From a technology standpoint, Intel is looking at what the fundamental building blocks of scale data center infrastructure will look like, Waxman said. The days when that fundamental building block was an entire server, complete with motherboard, memory, storage, network, cooling, power supply, network cards and so on, are coming to an end. Scale computing operators think in terms of entire racks, where resources are optimized for specific applications and where individual components, not entire servers, can be swapped out when needed. "We think the need to design rack-level solutions that allow you to create pools of compute, network, and storage that can be provisioned by a cloud is more important than it has ever been," Waxman said. A lot of the work Intel has done on rack-scale architecture has been in collaboration with Facebook and the Open Compute Project, the open source hardware and data center design initiative Facebook founded in 2011. Waxman has been on the OCP board since its inception, and the vision Intel had when it joined the project back then has largely come true, he said. That vision was that the world would eventually shift to scale computing, and Facebook and other OCP members were at the forefront of that shift.
Toward a 100 Percent Scale IT World
So how will this shift happen exactly, and what will it look like for smaller enterprise IT shops? It doesn't mean every IT team will start deploying web-scale infrastructure in their data centers. What they do will depend on the nature of their business. Companies whose infrastructure is at the core of the business and who need full control of it will go the scale data center route, no matter how small. These are companies like engineering services firms, Software-as-a-Service providers, and companies in healthcare or other compliance-sensitive verticals, Waxman said. Companies whose business doesn't revolve around IT infrastructure will eventually replace on-premises IT with various cloud services, he said. Either way, most of the world's software will end up running in scale data centers. Source: http://www.thewhir.com/web-hosting-news/intel-world-will-switch-to-scale-data-centers-by-2025
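Waxman's "pools of compute, network, and storage" line is the core idea behind rack-scale design: the unit of provisioning shrinks from a whole server to a slice of shared rack resources. The sketch below is purely illustrative, not Intel's Rack Scale architecture or any real API; every class, field and number is made up:

```python
# Illustrative sketch only: a rack modelled as pools of disaggregated
# resources, from which workloads are provisioned in right-sized slices
# instead of being handed whole, fixed-shape servers.
from dataclasses import dataclass

@dataclass
class RackPools:
    cpu_cores: int
    ram_gb: int
    storage_tb: int

    def provision(self, cores: int, ram_gb: int, storage_tb: int) -> bool:
        """Reserve resources if the remaining pools can cover the request."""
        if cores <= self.cpu_cores and ram_gb <= self.ram_gb and storage_tb <= self.storage_tb:
            self.cpu_cores -= cores
            self.ram_gb -= ram_gb
            self.storage_tb -= storage_tb
            return True
        return False

if __name__ == "__main__":
    rack = RackPools(cpu_cores=512, ram_gb=8192, storage_tb=400)
    # A storage-heavy job and a compute-heavy job draw from the same pools;
    # neither is forced into the shape of a particular server chassis.
    print(rack.provision(cores=16, ram_gb=64, storage_tb=100))   # True
    print(rack.provision(cores=400, ram_gb=2048, storage_tb=10)) # True
    print(rack)
```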
  22. Mobility and agility are the two key concepts for the new decade of computing innovation. At the epicenter of this newly enabled computing trend is cloud computing. Virtualization and its highly scaled big brother, cloud computing, will change our technology-centered lives forever. These technologies will enable us to do more (more communicating, more learning, more global business and more computing) with less: less money, less hardware, less data loss and less hassle. During this decade, everything you do in the way of technology will move to the data center, whether it's an on-premises data center or a remote cloud-architecture data center thousands of miles away. An era of agile computing is upon us; keep an eye on these ten server-oriented technology trends for the next 10 years.
1. Mobile Computing. As more workers report to their virtual offices from remote locations, computer manufacturers must supply this new breed of on-the-go worker with sturdier products able to connect to, and use, any available type of Internet connectivity. Mobile users look for lightweight, durable, easy-to-use devices that "just work," with no lengthy or complex configuration and setup. This agility will come from these smart devices' ability to pull data from cloud-based applications. Your applications, your data and even your computing environment (formerly known as the operating system) will live comfortably in the cloud to allow for maximum mobility.
2. Virtualization. By the end of this decade, virtualization technology will touch every data center in the world. Companies of all sizes will either convert their physical infrastructures to virtual hosts and guests or move to an entirely hosted virtual infrastructure. As more business owners try to extend their technology refresh cycles, virtualization's seductive money-saving promise brings new hope to stressed budgets as we collectively pull out of the recession. The global move to virtualization will also put pressure on computer manufacturers to deliver greener hardware for less green.
3. Cloud Computing. Cloud computing, closely tied to virtualization and mobile computing, is the technology that some industry observers dismiss as "marketing hype" or old technology repackaged for contemporary consumption. Beyond the hype and relabeling, savvy technology companies will leverage cloud computing to present their products and services to a global audience at a fraction of the cost of current offerings. Cloud computing also protects online ventures with an "always on" philosophy, promising that their services will never suffer an outage. Entire business infrastructures will migrate to the cloud during this new decade, making every company a globally accessible one.
4. Web-based Applications. Heavy, locally installed applications will cease to exist by the end of the decade. This move will occur ahead of the move to virtual desktops. The future of client/server computing is server-based applications and thin clients: everything, including the client software, will remain on a remote server, and your client device (e.g., cell phone, computer, ebook reader) will call applications to itself much like the X Terminals of yesteryear (see the sketch after this item).
5. Libraries. By the end of this decade, printed material will all but disappear in favor of its digital counterpart. Digitization of printed material will be the swan song for libraries, as all but the most valuable printed manuscripts head to the world's recycling bins. 
Libraries, as we know them, will cease operation and likely reopen as book museums where schoolchildren will see how we used physical books back in the old days.
6. Open Source Migration. Why suffer under the weight of license fees when you can reclaim those lost dollars with a move to open source software? Companies that can't afford to throw away money on licensing fees will move to open source software such as Linux, Apache, Tomcat, PostgreSQL and MariaDB. This decade will prove that the open source model works, and the proprietary software model does not.
7. Virtual Desktops. Virtual Desktop Infrastructure (VDI) has everyone's attention these days and will continue to hold it for the next few years as businesses move away from local desktop operating systems to virtual ones housed in data centers. This concept ties into mobile computing, virtualization and cloud computing. Desktops will likely reside in all three locations (PC, data center, cloud) for a few more years, but the transition will approach 100 percent non-local by the end of the decade. Moving away from localized desktop computing will lower maintenance bills and alleviate much of the user error associated with desktop operating systems.
8. Internet Everywhere. You've heard of the Internet, haven't you? Do you remember when it was known as The Information Superhighway, and all of the discussions and predictions about how it would change our lives forever? The future is here and the predictions came true. The next step in the evolution of the Internet is to have it available everywhere: supermarket, service station, restaurant, bar, mall and automobile. Internet access will exist everywhere by the end of this new decade. Every piece of electronic gadgetry (yes, even your toaster) will have some sort of Internet connectivity, due in part to the move to IPv6.
9. Online Storage. Currently, online storage is still a geek thing with limited appeal. So many of us have portable USB hard drives, flash drives and DVD burners that online storage is more of a luxury than a necessity. However, the approaching mobile computing tsunami will require you to have access to your data on any device with which you're working. Even the most portable storage device will prove unwieldy for the user who needs her data without fumbling with an external hard drive and USB cable. Much like cell phones and monthly minutes plans, new devices will come bundled with an allotment of online storage space.
10. Telephony. As dependence on cell phones increases, manufacturers will create new phones that will make the iPod look like a stone tool. They won't resemble current phones in appearance or function. You'll have one device that replaces your phone, your computer, your GPS and your ebook reader. This is yet another paradigm shift brought about by the magic of cloud computing. Telephony, as we know it, will fall away into the cloud as Communication as a Service (CaaS). Moving communications to the data center with services such as Skype and other VoIP offerings is a current reality, and large-scale migrations will soon follow. Source: http://www.serverwatch.com/trends/article.php/3859911/Top-10-Server-Technology-Trends-for-the-New-Decade.htm
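Trend 4's thin-client model is easy to picture: the application and its data stay on a server, and the local device only asks for what it needs. A minimal, hypothetical sketch, with a made-up endpoint and using only the Python standard library:

```python
# Minimal thin-client sketch: application logic and data live on a remote
# server; the local device only fetches and displays results.
# The URL is a placeholder, not a real service.
import json
from urllib.request import urlopen

def fetch_document(doc_id: str) -> dict:
    """Ask the server-side application for a document; nothing is stored locally."""
    with urlopen(f"https://apps.example.com/documents/{doc_id}") as resp:
        return json.load(resp)

if __name__ == "__main__":
    doc = fetch_document("quarterly-report")
    print(doc.get("title", "<untitled>"))
```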