LG – new 4K monitor packs four full HD displays into one

LG’s latest monitor should tick all the boxes for those after a cleaner multiple monitor setup. With a screen size of 42.5 inches, the catchily named 43UD79-B can display a single source in 3840 x 2160 resolution full screen, or up to four screens of full HD from different sources all at once.

Boasting an IPS panel with a 1,000:1 contrast ratio, a 178-degree viewing angle and a 60 Hz refresh rate, the monitor packs a total of seven ports around back. These include four HDMI ports (two HDMI 2.0 and two HDMI 1.4), one DisplayPort 1.2a input, one USB-C port that can take a DisplayPort signal, and an RS-232C terminal. There are also two USB 3.0 ports that allow a keyboard and mouse to be connected to control two computers at once, plus a 3.5 mm headphone jack and two built-in 10 W speakers.

The screen can be split in a variety of configurations, ranging from a single unified display to two-, three- and four-screen setups. When displaying four screens of full HD video, each “screen” measures roughly 21.3 inches, and the monitor also supports picture-in-picture (PIP), which lets you keep an eye on different sources while a 4K image takes up the entire display. You can switch between the various configurations with the included remote control.

While it looks like just the thing for financial wheelers and dealers, LG is also aiming the monitor at gamers with support for Dynamic Action Sync (DAS) to reduce input lag, and Black Stabilizer and Game Mode. It’s also compatible with AMD’s FreeSync, which reduces image tears and choppiness.

The 43UD79-B measures 967 x 648 x 275 mm (38.1 x 25.5 x 10.8 in) and weighs 15.9 kg (35 lb) with stand, and 967 x 575 x 71 mm (38.1 x 22.6 x 2.8 in) and 12.3 kg (22.1 lb) without.

It’s set to go on sale in Japan on May 19 for around ¥83,000 (US$740).

Source: LG Japan

Turn your iPhone into a personal air quality monitor

It’s easy enough to check the ingredients on the foods that we buy, but how do we know what’s in the air that we breathe? Well, there are air-quality-monitoring devices, although they’re mostly designed to sit in one place. Silicon Valley-based Sprimo Labs has developed what it claims is a more practical and portable alternative, in the form of its Sprimo Personal Air Monitor. The tiny gadget simply plugs into an iPhone’s Lightning port.
 

The Sprimo doesn’t require batteries, nor does it even need to be turned on. Once it’s plugged into the phone and the accompanying app is launched, it just starts measuring the temperature, humidity and quality of the surrounding air.


The quality reading is based on the levels of volatile organic compounds (VOCs) in the air, and is expressed on the phone’s screen as a numerical and color value – low numbers and a green color are good, medium numbers and a yellow color are OK, while high numbers and a red color are nasty.
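
For a rough sense of how such a traffic-light scale might be implemented (Sprimo hasn’t published its actual thresholds, so the function name and cutoffs below are purely illustrative), a simple mapping could look like this:

```python
def voc_to_color(voc_ppb: float) -> str:
    """Map a VOC reading to a traffic-light color.

    The thresholds are illustrative placeholders, not Sprimo's actual scale.
    """
    if voc_ppb < 200:
        return "green"   # low reading: good air
    if voc_ppb < 600:
        return "yellow"  # medium reading: OK
    return "red"         # high reading: nasty

print(voc_to_color(120))  # green
print(voc_to_color(750))  # red
```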

Of course, what users do next is up to them. They could just leave the area, or set about removing the source of the VOCs, if applicable – things like toxic carpeting and cigarette smoke can be removed from an indoor setting, for example, while it’s a little more difficult to get automobile exhaust out of an outdoor setting.
Additionally, the app allows users to become part of a Sprimo community, in which air quality readings from multiple users in multiple locations are displayed on a city map.

The device is currently the subject of a Kickstarter campaign, where a pledge of US$20 will get you one – assuming it reaches production, that is. The planned retail price is $40. And yes, an Android version is said to be on its way.

We have seen some other smartphone-based air-monitoring devices in the works lately, although the recently crowdfunded Atmotube is one that might particularly give the Sprimo a run for its money.

Source: Google

Chronos – the next level of Wi-Fi

There’s a lot of buzz around “smart home” products and the convenience of advanced automation and mobile connectivity. However, new research may soon be able to add extra emphasis on “smart” by enhancing wireless technology with greater awareness. A team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system that enables a single wireless access point to accurately locate users down to a tenth of a meter, without any added sensors.

Wireless networks are good at quickly identifying devices that come within range. Once you link several access points together, it becomes possible to zero in on someone’s position by triangulation. But this new wireless technology – dubbed “Chronos” – is capable of 20 times the accuracy of existing localization methods. Through experiments led by Professor Dina Katabi, Chronos has been shown to correctly distinguish individuals inside a store from those outside up to 97 percent of the time, which would make it easier for free Wi-Fi in coffee shops to be a customer-only affair, for example.

Chronos achieves such accuracy by resolving the actual distance from a user to an access point: the data’s “time-of-flight” multiplied by the speed of light. Combined with the measured angle of arrival, positions can be determined to the centimeter. By designing Chronos to quickly hop across different frequency channels, take measurements, and then “stitch” together the results, the team has enhanced commercial Wi-Fi with a level of accuracy normally limited to expensive ultra-wideband radio.
The system had to be programmed to account for additional delays in the process. A Wi-Fi encoding method helps to distinguish packets of data from actual time-of-flight, and acknowledgements from data packets are used to cancel out “phase offsets” generated by all the band-hopping. The researchers also developed an algorithm to address bouncing signals and the separate delays experienced by each copy.
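
At its core, the position estimate boils down to turning a time-of-flight into a distance and combining it with the measured angle of arrival. The sketch below shows only that final geometric step, assuming the frequency hopping, delay correction and phase-offset cancellation described above have already produced a clean time-of-flight (the function name and interface are illustrative, not taken from the Chronos paper):

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def locate(time_of_flight_s: float, angle_rad: float) -> tuple[float, float]:
    """Convert a one-way time-of-flight and angle of arrival into an (x, y)
    offset from the access point. Purely geometric; all hardware delays and
    phase offsets are assumed to have been removed already."""
    distance = time_of_flight_s * SPEED_OF_LIGHT
    return distance * math.cos(angle_rad), distance * math.sin(angle_rad)

# Example: ~10 ns of flight time at 30 degrees places the device about 3 m away.
x, y = locate(10e-9, math.radians(30))
print(f"x = {x:.2f} m, y = {y:.2f} m")
```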

With Wi-Fi that can pinpoint other wireless devices, we can envision a future where drones maintain a safe distance from people or other drones, misplaced wireless devices are recovered more efficiently, homes identify the individuals within and adapt heating and/or lighting accordingly, and wireless access is extended exclusively to select rooms or living areas.

A paper on the research was recently presented at the USENIX Symposium on Networked Systems Design and Implementation (NSDI ’16).
The video below shows a live demonstration of the Chronos system.

Source: Massachusetts Institute of Technology

Artificial synapse – neural network hardware for brainier computers

The human brain is nature’s most powerful processor, so it’s not surprising that developing computers that mimic it has been a long-term goal. Neural networks, the artificial intelligence systems that learn in a very human-like way, are the closest models we have, and now Stanford scientists have developed an organic artificial synapse, inching us closer to making computers more efficient learners.
In an organic brain, neuronal cells send electrical signals to each other to process and store information. Neurons are separated by small gaps called synapses, which allow the cells to pass the signals to each other, and every time that crossing is made, that connection gets stronger, requiring less energy each time after. That strengthening of a connection is how the brain learns, and the fact that processing the information also stores it is what makes the brain such a lean, mean, learning machine.
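
That “use it and it gets stronger, and cheaper to use” behavior is loosely analogous to how connection weights are strengthened in software neural networks. A toy sketch of the idea (an illustration only, not a model of a biological synapse or of the Stanford device):

```python
def strengthen(weight: float, rate: float = 0.1) -> float:
    """Each time a signal crosses the connection, nudge its weight upward,
    with diminishing returns as it approaches a maximum of 1.0."""
    return weight + rate * (1.0 - weight)

w = 0.1
for crossing in range(5):
    w = strengthen(w)
    print(f"after crossing {crossing + 1}: weight = {w:.3f}")
```
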
Neural networks model this on a software level. These AI systems are great for handling huge amounts of data, and like the human brain that inspired them, the more information they’re fed, the better they become at their job. Recognizing and sorting images and sounds are their main areas of expertise at the moment, and these systems are driving autonomous cars, beating humanity’s best Go players, creating trippy works of art and even teaching each other. The problem is, these intelligent software systems still run on traditional computer hardware, meaning they aren’t as energy efficient as they could be.
“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” says Yoeri van de Burgt, lead author of the study. “Instead of simulating a neural network, our work is trying to make a neural network.”
So the team set about building a physical, artificial synapse that mimics the real thing by processing and storing information simultaneously. Based on a battery and working like a transistor, the device is made up of two thin films and three terminals, with salty water acting as an electrolyte between them. Electrical signals jump between two of the three terminals at a time, controlled by the third.
First, the researchers trained the synapse by sending various electric signals through it to figure out what voltage needs to be applied to switch it into a given electrical state. Digital transistors have only two states – zero and one – but with its three-terminal layout, the artificial synapse can have up to 500 different states programmed in, exponentially expanding the computational power it could be capable of.
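
A quick back-of-the-envelope comparison shows what those extra states are worth in information terms (assuming every state is equally usable, which is an idealization):

```python
import math

binary_states = 2     # a conventional digital transistor: 0 or 1
synapse_states = 500  # states reported for the artificial synapse

# Information capacity per device, in bits
print(f"transistor: {math.log2(binary_states):.2f} bits")   # 1.00 bit
print(f"synapse:    {math.log2(synapse_states):.2f} bits")  # ~8.97 bits
```
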
Better still, switching between states takes a fraction of the energy of other systems. That’s still not in the ballpark of a brain – the artificial synapse uses 10,000 times the energy of a biological one – but it’s a step in the right direction, and with further testing in smaller devices, the researchers hope to eventually improve that efficiency.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” says A. Alec Talin, senior author of the study. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”
While only one artificial synapse has been built so far, the team ran extensive experiments on it, and extrapolated the data gathered to simulate how an array of artificial synapses could process information. Making use of the visual recognition skills of a neural network, the researchers tested its ability to identify handwritten numbers – 0 to 9 – in three different styles, and found that the system could recognize the digits up to 97 percent of the time.
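
As a rough stand-in for that kind of experiment – an ordinary software network trained on scikit-learn’s small built-in digits dataset, not the authors’ simulation code – the task looks something like this:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 grayscale images of handwritten digits 0-9
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small fully connected network as a software analogue of the synapse array
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2%}")
```
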
Earlier examples of artificial synapses, like that from USC in 2011, were not only less powerful, but weren’t made completely from organic materials. Composed mostly of hydrogen and carbon and running on the same voltages as human neurons, the Stanford synapse could eventually integrate with biological brains, opening up the possibility of devices that can be more directly controlled by thought, like prosthetics and brain-machine interfaces.
The next step for the researchers is to test the simulated results by producing a physical array of the artificial synapses.
The research was published in the journal Nature Materials.
Source: Stanford University

Two-way LEDs turn your screen into a charger

Could it one day be possible to top up your phone’s battery from ambient light? Companies like Japan’s Kyocera, which has solar-powered displays in the works, certainly think so, as do materials scientists at the University of Illinois at Urbana-Champaign, who have developed multi-purpose LED arrays that absorb light and turn it into electricity (and pack a number of other neat tricks as well).

The LED arrays consist of tiny nanorods, each less than five nanometers in diameter, arranged on a thin film and made from three types of semiconductor material. One of these materials both emits and absorbs visible light, while the other two control how electrons flow through the first. This combination gives the LEDs the ability to emit, sense and respond to visible light.
They do this seemingly simultaneously by switching very quickly between emitting mode and detecting mode – three orders of magnitude faster than a standard display refreshes. The switching is so fast that it is imperceptible to the human eye, so the display appears to be constantly alight.
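
To put that speed in perspective, here is the arithmetic under the assumption of a typical 60 Hz refresh rate (the article doesn’t specify the exact figures):

```python
refresh_hz = 60                      # assumed typical display refresh rate
switching_hz = refresh_hz * 1_000    # "three orders of magnitude faster"

frame_period_ms = 1_000 / refresh_hz
switch_period_us = 1_000_000 / switching_hz

print(f"refresh period:     {frame_period_ms:.1f} ms per frame")
print(f"mode-switch period: ~{switch_period_us:.0f} us, far too fast for the eye to notice")
```
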
When it does detect light, it behaves in a similar way to a solar cell, absorbing it through the photovoltaic effect. At the moment, this is only on a very small scale, but the researchers are buoyed by their early results and believe that they can work toward a self-powered LED display that doesn’t compromise on performance.
“The key improvement would be in the device being able to absorb much more of the ambient light,” Moonsub Shim, lead author of the study, explains to New Atlas. “However, displays also need to emit light and that imposes a limitation. I think there are ways around this problem but further research is needed.”
Whether this means that all – or just the majority – of the display’s power would come from the array itself is unknown at this stage; nevertheless, Shim tells us he is “optimistic about the prospect of powering by harvesting ambient light.”
Using this light-detection capability to generate power might be the long game, but the researchers say it could offer some highly useful functionality in the shorter term, too. Because the arrays can be programmed to react to light signals, they could be used to create interactive displays that recognize objects or respond to touchless gestures, such as a wave, an approaching finger or a laser stylus in an electronic whiteboard-type setup.
What’s more, they could automatically adjust their brightness depending on ambient light. On one hand, this is similar to how some laptops and mobile devices already adjust screen brightness to the room; on the other, it could offer something more.
“Most tablets and cell phones have a separate photodetector somewhere on the device to monitor ambient lighting to adjust screen brightness,” explains Shim. “But because there is only one, or a limited number, the brightness of the entire screen is adjusted to the same level. Furthermore, you need to build in these photodetectors separately in addition to the components of the display. With our LEDs, if they were integrated into displays as emissive pixels, you wouldn’t need separate photodetectors.”
Where this could come in especially handy is when using a device in an unevenly lit environment, because it means that each pixel’s brightness could be adjusted independently.
“So if your screen was partly under the shade, it would adjust each part differently so that the viewing is much more uniform,” Shim continues. “This ability to detect light at the individual pixel level while having each pixel appear to continuously emit light provides a whole new platform to develop tablets, cell phones, TVs and other displays with new capabilities, not just adjusting brightness.”
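
A minimal sketch of that per-pixel idea, with made-up ambient readings and illustrative scaling thresholds (nothing here comes from the paper itself):

```python
import numpy as np

# Hypothetical ambient-light readings per pixel (arbitrary units),
# e.g. a screen that is half in shade and half in direct light.
ambient = np.array([
    [100, 100, 900, 900],
    [100, 100, 900, 900],
])

# Scale each pixel's brightness with its local ambient level,
# clamped between a minimum and maximum (thresholds are illustrative).
lo, hi = 50, 1000
brightness = np.clip((ambient - lo) / (hi - lo), 0.2, 1.0)

print(brightness)  # shaded pixels get a lower factor, brightly lit pixels a higher one
```
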
It is clear that a lot more work is needed, and the researchers say they are only scratching the surface of what is possible, but Shim tells us the early results have them very excited. Their demonstrations so far have involved only red LEDs, and they are now working on methods to incorporate green and blue pixels too, as well as tweaking the composition of the nanorods with a view to boosting their light-harvesting capacity.
The research was published in the journal Science.
Source: University of Illinois at Urbana-Champaign

IoT – an introduction to the Internet of Things

What is the Internet of Things (IoT)?
The Internet of Things may be a hot topic in the industry, but it’s not a new concept. In the early 2000s, Kevin Ashton was laying the groundwork for what would become the Internet of Things (IoT) at MIT’s Auto-ID lab. Ashton was one of the pioneers who conceived this notion as he searched for ways that Procter & Gamble could improve its business by linking RFID information to the Internet. The concept was simple but powerful: if all objects in daily life were equipped with identifiers and wireless connectivity, these objects could communicate with each other and be managed by computers. In a 1999 article for the RFID Journal, Ashton wrote:

“If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory. RFID and sensor technology enable computers to observe, identify and understand the world—without the limitations of human-entered data.” [1]
At the time, this vision required major technology improvements. After all, how would we connect everything on the planet? What type of wireless communications could be built into devices? What changes would need to be made to the existing Internet infrastructure to support billions of new devices communicating? What would power these devices? What must be developed to make the solutions cost effective? In 1999, there were more questions than answers about the IoT concept.
Today, many of these obstacles have been overcome. The size and cost of wireless radios have dropped tremendously. IPv6 allows us to assign a communications address to billions of devices. Electronics companies are building Wi-Fi and cellular wireless connectivity into a wide range of devices. ABI Research estimates over five billion wireless chips will ship in 2013.[2] Mobile data coverage has improved significantly, with many networks offering broadband speeds. While not perfect, battery technology has improved, and solar recharging has been built into numerous devices. There will be billions of objects connecting to the network within the next several years.
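
The jump in address space that IPv6 brings is easy to quantify: 32-bit IPv4 tops out at roughly 4.3 billion addresses, while 128-bit IPv6 offers about 3.4 × 10^38 – more than enough for billions upon billions of connected objects. A quick back-of-the-envelope check:

```python
ipv4_addresses = 2 ** 32   # 32-bit address space
ipv6_addresses = 2 ** 128  # 128-bit address space

print(f"IPv4: {ipv4_addresses:,} addresses")    # ~4.3 billion
print(f"IPv6: {ipv6_addresses:.3e} addresses")  # ~3.4e38
```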