Sunday 1 June 2014

Next 5 Life-Changing Tech Innovations

Classrooms learn you
If children can't learn the way we teach, why don't we teach the way they learn? That question captures IBM's vision for classrooms that track each student's progress and then personalize coursework accordingly. Teachers naturally adapt to the needs of each student, but IBM says cloud-based systems will "go much further" by automatically creating customized lesson plans and tailoring coursework for specific careers. This will enable schools to "reach more students in more meaningful ways," says IBM. With students learning at their own pace, we'll move beyond the tyranny of grades.
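
As a thought experiment, here's a minimal Python sketch of the kind of scheduling logic such a system might run. The mastery scores, threshold, and career-track preference are all invented for illustration; they're not details of IBM's actual system.

```python
# A minimal sketch of adaptive lesson selection, assuming a hypothetical
# gradebook of per-topic mastery scores between 0.0 and 1.0. All names,
# thresholds, and weightings here are illustrative, not IBM's design.

MASTERY_THRESHOLD = 0.8  # topics scoring below this get more coursework

def next_lessons(mastery, career_track):
    """Schedule the weakest topics first, preferring the student's career track."""
    weak = [topic for topic, score in mastery.items() if score < MASTERY_THRESHOLD]
    # Career-relevant topics come first, then weakest-to-strongest.
    weak.sort(key=lambda t: (t not in career_track, mastery[t]))
    return weak

student = {"algebra": 0.55, "geometry": 0.90, "statistics": 0.70}
print(next_lessons(student, career_track=["statistics"]))
# -> ['statistics', 'algebra']  (geometry is already mastered)
```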

Buying local beats buying online
Local retailers will fight back and become "better than e-tailers can ever hope to be," according to IBM, by merging the tactility and immediacy of physical retail with advances in augmented reality, wearable computing, and location intelligence. Given the sales trends of recent years, that's a pretty bold claim. But IBM says these digital tools will let retailers build richer, more immersive, and more personalized in-store shopping experiences.

Doctors routinely analyze DNA
No doubt you've read about individualized medicine. In the case of cancer, for example, cutting-edge healthcare organizations are personalizing treatments based on the DNA of the patient and his or her tumors. Today this approach to cancer treatment is all too rare, and where it is available it's time- and cost-prohibitive. IBM's bet is that within five years, DNA sequencing will take less than a day, and cloud-based systems will crunch reams of medical information to help doctors come up with individualized treatment plans.
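
To make the matching step concrete, here's a hedged sketch of the simplest possible version: look up a tumor's detected mutations in a table of known targeted therapies. The gene/drug pairs are well-publicized real examples, but the table and code are purely illustrative, and certainly not clinical guidance.

```python
# Illustrative sketch: match a tumor's detected mutations against a lookup
# table of targeted therapies. Real pipelines are vastly more involved.

KNOWN_TARGETS = {
    "BRAF V600E": "vemurafenib",          # melanoma
    "EGFR L858R": "erlotinib",            # non-small-cell lung cancer
    "HER2 amplification": "trastuzumab",  # breast cancer
}

def suggest_therapies(tumor_mutations):
    """Return candidate targeted therapies for any mutations we recognize."""
    return [KNOWN_TARGETS[m] for m in tumor_mutations if m in KNOWN_TARGETS]

print(suggest_therapies(["BRAF V600E", "TP53 R175H"]))
# -> ['vemurafenib']  (no targeted therapy matched TP53 in our toy table)
```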

Digital guardians protect your e-life

It knows when you are surfing. It knows that you're up late. It knows when you're logged into your real bank account, and it knows when it's a fake.
This isn't some privacy-invading Santa Claus; it's IBM's vision for digital guardians that know your digital life inside and out, and IBM isn't just talking about computer and smartphone interactions. Guardians will know your car, your house, and all your connected devices. And instead of relying on fixed rules and passwords, they'll analyze contextual, situational, and historical data to verify your identity and your actions on different devices.
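
Here's a toy sketch of what "contextual, situational, and historical" verification could look like in practice. The fields, weights, and threshold are all invented for illustration.

```python
# A toy illustration of behavioral checks: score a login attempt against
# the user's typical history rather than trusting a password alone.

TYPICAL = {"device": "home-laptop", "city": "Boston", "active_hours": (7, 23)}

def risk_score(attempt):
    """Higher score = less like this user's historical behavior."""
    score = 0.0
    if attempt["device"] != TYPICAL["device"]:
        score += 0.5   # unfamiliar device
    if attempt["city"] != TYPICAL["city"]:
        score += 0.3   # unusual location
    start, end = TYPICAL["active_hours"]
    if not (start <= attempt["hour"] <= end):
        score += 0.2   # odd hour: "it knows that you're up late"
    return score

attempt = {"device": "home-laptop", "city": "Lagos", "hour": 3}
print(risk_score(attempt))  # 0.5 -> suspicious enough to challenge the user
```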

Cities will learn to be more livable
Within five years, city leaders will tap social feedback from citizens to know when and where resources are needed so the city can dynamically adapt. IBM says it has researchers working in Brazil to develop a crowdsourcing tool that allows citizens to report accessibility problems via mobile phones. That's a step toward helping people with disabilities better navigate urban streets, and with the World Cup and Summer Olympics headed to Brazil, it's a step in the right direction. You can also look forward to Internet of Things-type deployments in which sensors track the movement of traffic and people through transit systems, triggering adjustments to traffic signals, train schedules, and the like so the city can adapt and optimize.
We've already reported on Boston's pothole app and Louisville, Kentucky's smartphone-mapped asthma-tracking app, so this trend is already under way.
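
As a flavor of how the signal-adjustment piece might work, here's an illustrative sketch that lengthens a green phase as vehicle counts rise. Every number in it is made up for the example.

```python
# Illustrative sensor-driven adaptation: lengthen a signal's green phase
# when vehicle counts on an approach spike.

BASE_GREEN_S = 30             # normal green phase, in seconds
MAX_GREEN_S = 60              # cap so cross-traffic is never starved
VEHICLES_PER_EXTRA_SECOND = 2

def green_time(vehicles_waiting):
    """Scale the green phase with measured demand, up to a cap."""
    extra = vehicles_waiting // VEHICLES_PER_EXTRA_SECOND
    return min(BASE_GREEN_S + extra, MAX_GREEN_S)

for count in (4, 20, 100):
    print(count, "vehicles waiting ->", green_time(count), "s of green")
# 4 -> 32 s, 20 -> 40 s, 100 -> 60 s (capped)
```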

Google’s new self-driving car: Electric, no steering wheel

You are looking at Google’s very own, built-from-scratch-in-Detroit self-driving car. The battery-powered electric vehicle has a stop-go button, but no steering wheel or pedals. The plan is to build around 200 of the mostly-plastic cars over the next year, with road testing probably restricted to California for the next year or two. Google’s new self-driving car is incredibly cutesy, closely resembling a Little Tikes plastic car — there’s even the same damn smiley face on the front. The cutesy appearance is undoubtedly a clever move to reduce apprehension about the safety and long-term effects of autonomous vehicles — “Aw, how can something so cute be dangerous?”

Disappointingly, Google’s new car still has a ton of expensive hardware — radar, lidar, 360-degree cameras — sitting on a tripod on the roof. This ensures good sightlines around the vehicle, but it’s a shame that Google hasn’t yet worked out how to build the hardware into the car itself, as other car makers toying with self-driving functionality have. (Or maybe it has, but doesn’t want to invest additional money and engineering time until it’s time to commercialize the car.) In the concept art below, you can see that the eventual goal might be to build the computer vision and ranging hardware into a slightly less ugly rooftop beacon.

These first prototypes are mostly of plastic construction, with battery-electric propulsion limited to a top speed of 25 mph (40 km/h). Instead of an engine or “frunk,” there’s a foam bulkhead at the front of the car to protect the passengers. Inside, there are just a couple of seats and some great big windows so you can enjoy the views (which must surely be one of the best perks of riding in a self-driving car).

Removing everything except a stop-go button might sound like a good idea, but it’s naive. How do you move the car a few feet so someone can get out, or back it up to a trailer? Will Google’s software allow for temporary double parking, or parking off-road for a concert or party? Can you choose which parking spot the car will use, to leave the better/closer spots for your doddery grandfather? How will these cars handle the very “human” problems of giving way to other cars and pedestrians? Can you program the car to give way to a hot girl, but not an angry-looking trucker?

Google is now safety testing some early units, and will hopefully scale up production to around 200 cars that could be on the road “within the year.”

New optical brain scanner can see your brain ‘blush’, rivals PET & MRI without using radiation or super-magnets

Breakthroughs in scanning technology are often couched in terms of their potential for real-world impact; a better PET scanner or MRI machine is certainly nice, but ultimately limited by the dangers of radiation exposure or the costs of superconducting magnet rigs. So doctors and researchers have longed for a way to collect the same kind of information cheaply and without adding any extra risk to the patient. Optical scanners — as in, plain old safe infrared light — have been making progress toward this goal for some time, but this week Nature published findings that imply the technology could finally be ready for the big time.

When an area of the brain starts working, its use of oxygen spikes dramatically. In a functional (real-time) MRI scan, the magnetic differences between oxygen-rich and oxygen-deprived blood let operators see where and when neurons are firing, but the super-powerful magnets needed for this technique mean the machines are far too big and expensive for truly widespread use.
Diffuse optical tomography (DOT) scans, on the other hand, detect oxygenation by watching for changes in the colour and intensity of low-intensity light beams shone straight into your noggin. Joseph Culver, senior author of the study, compared the process to seeing embarrassment in a rush of blood to the cheeks — though in this case, the “cheeks” are the brain lobes inside your skull, and the “embarrassment” could be anything from outright lust to complex logical thought.
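
The underlying physics is the modified Beer-Lambert law: a dip in detected light intensity implies a change in hemoglobin concentration along the light's path. Here's a sketch with placeholder constants, not calibrated values from the study.

```python
import math

# Sketch of the principle behind DOT: relate a drop in detected light
# intensity to a change in absorber (hemoglobin) concentration.

EPSILON = 0.1   # molar absorptivity at our wavelength (illustrative units)
PATH_CM = 6.0   # source-detector separation through tissue, in cm
DPF = 6.0       # differential pathlength factor: scattering makes the true
                # optical path several times longer than the straight line

def delta_concentration(i_baseline, i_now):
    """Change in absorber concentration implied by a change in intensity."""
    delta_attenuation = math.log10(i_baseline / i_now)
    return delta_attenuation / (EPSILON * PATH_CM * DPF)

# Active cortex pulls in oxygen-rich blood and absorbs more light, so the
# detector sees a slightly dimmer signal:
print(delta_concentration(i_baseline=1.00, i_now=0.97))  # small positive change
```
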
Penetrating your head with light might sound dangerous, but the non-ionizing radiation used by a DOT machine is theoretically incapable of causing damage — which is more than you can say for PET scanning. PET scans involve injecting a patient with radioactive isotopes that we can watch move through the body; it’s a calculated risk, and often avoided because the potential to harm outweighs the potential to help.

If the researchers can prove that there is a viable biomedical market for their invention, expect the technology's performance to improve dramatically. Right now it's being suggested for use in children and people with implants — patients who cannot undergo conventional scans as easily as most of us. Still, with all the potential advantages in cost and safety, the team’s long-term goals should be much, much loftier than that.

Intel pushes photonic tech for the data centre 

Chipmaker Intel is rallying some key industry names behind its 100G CLR4 photonic communication technology, which it claims will revolutionise the data centre.
The technology aims to solve a problem large data centres face: moving high-bandwidth traffic across longer runs of fibre cable.
Intel has formed a 100G CLR4 Alliance with members including Arista, Brocade, Dell, eBay, HP, and Oclaro. The idea is that the Alliance will create an open specification for a cost-effective, low-power 100G CWDM optical link with a reach of up to 2km.

Photonic Communication

Mario Paniccia, Intel fellow and general manager of the Silicon Photonics Solutions Group, said that there need to be industry-wide specifications or standards that ultimately make things work better together and, more importantly, drive down costs. He said this would help the data centre industry grow.
Paniccia said that photonic communication is a great way to move data in data centres as they become massive and need longer reaches for connectivity.
"Optical has the known benefits of moving data further than electrical links, transmitting data faster and not being affected by electro-magnetic interference. As we move from 10Gbps to 25Gbps signalling, optical communication becomes even more important," Paniccia said.
There are telecom-centric optical transceivers operating at 100Gbps, but their power, size, and costs are non-starters for the new data centre. "There is a huge gap that needs to be filled for reaches that span from, say, 100m to 2km. And that's the problem we are trying to address here," Paniccia said.
Andy Bechtolsheim, founder, chairman, and chief development officer of Arista Networks, said the alliance will speed up the industry's ability to make cost-effective, low-power 100G CLR4 QSFP form factor optics that address the 2km reach requirements of large data centre customers.
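
For the curious, the arithmetic behind the "100G" label is simple, assuming the usual four-lane CWDM layout. The wavelength grid below is the common CWDM4 plan and is an assumption on our part, not something the alliance specified here.

```python
# The arithmetic behind "100G", assuming four wavelengths on one fibre,
# each carrying a 25Gbps signal.

LANES = 4
GBPS_PER_LANE = 25  # the 25Gbps signalling Paniccia mentions
WAVELENGTHS_NM = [1271, 1291, 1311, 1331]  # common CWDM4 grid (assumption)

total = LANES * GBPS_PER_LANE
print(f"{LANES} lanes x {GBPS_PER_LANE}Gbps = {total}Gbps aggregate")
for nm in WAVELENGTHS_NM:
    print(f"  lane at {nm}nm carries {GBPS_PER_LANE}Gbps")
# Multiplexing all four wavelengths onto one duplex fibre pair is what keeps
# cabling and power costs down over the 100m-2km reach the alliance targets.
```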

Robot learns to recognize objects on its own

HERB, a robot butler under development at Carnegie Mellon University, can discover objects by itself

When all the humans went home for the day, a personal-assistant robot under development in a university lab recently built digital images of a pineapple and a bag of bagels that were inadvertently left on a table – and figured out how it could lift them. 

The researchers didn't even know the objects were in the room.

Instead of being frightened at their robot's independent streak, the researchers point to the feat as a highlight in their quest to build machines that can fetch items and microwave meals for people who have limited mobility or are, ahem, too busy with other chores. 
The robot, a two-armed machine called HERB (the Home Exploring Robot Butler), uses color video, a Kinect depth camera, and non-visual information to build digital images of objects, which for its purposes are defined as things it can lift. 

The depth camera is particularly useful, as it provides three-dimensional shape data. Other information HERB collects includes an object's location – on the floor, on a table, or in a cupboard. It can also determine whether an object moves, and whether it appears in a particular place at a particular time – say, mail in a mail slot. 

The video below illustrates how the process, called Lifelong Robotic Object Discovery (LROD), works.
At first, everything in the video lights up as a potential object, but as HERB uses the information it gathers – domain knowledge – to discriminate what is and isn't an object, the objects themselves become clearer, allowing the robot to build digital models of them.
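
Here's an illustrative, much-simplified take on that filtering step: combine domain-knowledge cues into an "objectness" score and keep the candidates that clear a threshold. The cue names and weights are invented; the real system is far richer.

```python
# Illustrative LROD-style filtering with invented domain-knowledge cues.

WEIGHTS = {
    "moved_between_views": 0.4,    # things that move are probably objects
    "liftable_size": 0.3,          # HERB defines objects as things it can lift
    "on_supporting_surface": 0.2,  # tables and shelves tend to hold objects
    "seen_repeatedly": 0.1,        # repeated sightings build confidence
}
OBJECT_THRESHOLD = 0.5

def is_object(candidate):
    """Sum the weights of whichever cues this candidate segment satisfies."""
    score = sum(w for cue, w in WEIGHTS.items() if candidate.get(cue))
    return score >= OBJECT_THRESHOLD

pineapple = {"moved_between_views": True, "liftable_size": True,
             "on_supporting_surface": True}
wall_patch = {"on_supporting_surface": True}
print(is_object(pineapple), is_object(wall_patch))  # True False
```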

A mind-reading camera that makes life GIF-able.


How many times has a moment so absolutely hilarious or unbelievably adorable unfolded before your eyes, making you wish you'd been holding a video camera? Japanese tech company Neurowear's high-tech headgear, Neurocam, aims to solve that problem for you. The device straps a camera and an electroencephalogram (EEG) reader to the wearer's cranium. During moments of high-frequency electrical signals detected through the skull -- a general indication of excitement -- the camera switches on to record short five-second GIFs onto an iPhone that is somewhat awkwardly attached to the device.
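
The trigger logic amounts to thresholding, something like the toy sketch below; the 0-100 excitement scale and the cutoff of 60 are assumptions for illustration, not Neurowear's published numbers.

```python
# Toy version of the trigger: watch an "excitement" value derived from EEG
# band power and save a clip whenever it crosses a threshold.

EXCITEMENT_THRESHOLD = 60   # assumed scale and cutoff, for illustration
CLIP_SECONDS = 5

def should_record(excitement):
    """Fire the camera when the brain-signal reading spikes high enough."""
    return excitement >= EXCITEMENT_THRESHOLD

for level in (20, 45, 72, 30):
    if should_record(level):
        print(f"excitement {level}: saving a {CLIP_SECONDS}-second GIF to the phone")
# -> only the 72 reading triggers a recording
```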

Understanding IoT: The Internet of Things

The Internet of Things (IoT) is not some future concept, nor is it just around the corner; it has been here for some time, and it’s growing. Fueled by the expansion of wireless and cloud computing technology, there are now more things connected to the internet than there are people. That’s all people, not just people on the internet.
What are these “things” that make up the Internet of Things? The IoT is not limited to smartphones and tablets, laptops and desktops. Every year, more and more devices capable of internet access are released, exponentially expanding the universe of Internet of Things devices.
Heart monitors and insulin pumps generate real-time data for the healthcare professionals caring for patients. Cattle ranchers can monitor cows in the field, not only pinpointing their location but also identifying those that are pregnant. Power stations, remote pumps feeding oil and gas lines, and even entire assembly lines can now be accessed, monitored, and controlled as part of the Internet of Things.

Your car texts you when it needs an oil change or when the tire pressure is getting low. Traffic lights send real-time data on traffic flow, allowing controllers to make adjustments to relieve congestion and helping drivers change their routes. Gas pumps provide price data to consumers, alert distributors about usage, and contribute to aggregate sales measurements, all of it available via the Internet.
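
As a flavor of what one of those devices actually transmits, here's a sketch of a tire-pressure reading bundled up for the network. The device ID, field names, and threshold are hypothetical; a real deployment would publish this over a protocol such as MQTT.

```python
import json
import time

# Hypothetical telemetry: a tire-pressure sensor packaging one reading.

LOW_PRESSURE_PSI = 30.0

def make_reading(device_id, psi):
    """Bundle a sensor reading, flagging low pressure so the car can text you."""
    return json.dumps({
        "device": device_id,          # hypothetical device ID
        "pressure_psi": psi,
        "alert": psi < LOW_PRESSURE_PSI,
        "timestamp": int(time.time()),
    })

print(make_reading("car-42/tire/front-left", 27.5))
# {"device": "car-42/tire/front-left", "pressure_psi": 27.5, "alert": true, ...}
```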

The possibilities of the IoT are limited only by the imagination.

If you think the IoT hasn’t entered your life yet, look around you. You might be surprised to discover how many devices — from cash registers to parking meters, gas pumps to washing machines — can access and transmit data over the Internet. The question to ask is not when it will affect you; it affects you now. The question is: how will you handle the increased data flow?