Forum Posts

StarckGate
Nov 24, 2020
In StarckGate News
The New Zealand–based company is pioneering a way of reusing rockets that's distinct from what competitors like SpaceX and Blue Origin have done.

New Zealand company Rocket Lab has hit a key milestone with the successful launch and recovery of its flagship Electron rocket. The mission, the firm's 16th so far, included a soft parachute landing of the first-stage booster in the ocean for the first time.

The mission: Electron was launched around 1:46 a.m. local time this morning from the company's launch site on the southern tip of New Zealand's North Island. The mission successfully deployed 30 satellites into low Earth orbit.

(Image credit: Peter Beck)

After two minutes in flight (over 26,000 feet in the air), the first-stage booster separated from the second stage, flipped around 180 degrees, and deployed a parachute that slowed its descent and allowed for a soft landing in the Pacific Ocean, after which crews successfully ventured out to recover it. It is the first time the company has ever attempted to recover a rocket booster.

Small is better: The company specializes in small payload launches. Its 55-foot-tall Electron rocket flies with 3D-printed engines—the only rocket of its kind flying at the moment. Electron can't send very heavy satellites into space (it is too small), but the rise of small satellites has opened up an enormous market that Rocket Lab wants to capitalize on, especially if the company can pull off frequent flights.

Rocket Lab plans to start launching from the US at Wallops Island, Virginia, next year. The company also has deep-space ambitions, including plans to launch a small satellite to Venus in 2023 to study the planet's atmosphere for possible signs of life.

By Neel V. Patel, November 20, 2020
Rocket Lab has successfully recovered a booster for the first time
StarckGate
Oct 21, 2020
In StarckGate News
Last month, a navy blue, six-seater aircraft took off from Cranfield Airport in England. Usually, a 15-minute, 20-mile flight wouldn't be noteworthy -- but this was the world's first hydrogen fuel-cell-powered flight for a commercial-size aircraft.

The plane's powertrain -- the mechanism which drives the plane, including fuel tanks and engine -- was built by ZeroAvia, a US and UK-based company developing hydrogen-electric engines. Using liquid hydrogen to feed fuel cells, the technology eliminates carbon emissions during the flight.

A conventional flight today produces half the CO2 generated by flights in 1990, largely thanks to an increase in fuel efficiency. However, due to record traffic growth, driven by increasing passenger numbers and trade volume, the aviation industry is creating more emissions than ever before -- accounting for 2% of global manmade carbon emissions. This percentage is set to increase, says Bobby Sethi, a senior lecturer in aviation at Cranfield University: other industries, like road transport, are "decarbonizing at a faster rate," he says, while aviation is lagging behind.

Some companies are pushing ahead with climate-friendly solutions in a bid to catch up. The Electric Aviation Group's 70-seat hybrid-electric aircraft aims to reduce CO2 emissions by 75% and is expected to enter service in 2028, while Airbus recently announced that it aims to manufacture three hydrogen aircraft, seating up to 200 passengers, by 2035.

(Airbus unveiled its ZEROe zero-emission concept in September 2020. Image: Airbus)

But there is a long wait until these models come to market, and aviation needs a solution now, says ZeroAvia founder and CEO Val Miftakhov. With funding from UK government-backed bodies including the Aerospace Technology Institute and Innovate UK, ZeroAvia wants to plug the gap as aviation technology develops and provide a sustainable solution for short- and medium-haul flights. Miftakhov, who piloted ZeroAvia's test flight, says the company's technology is designed to be retrofitted into existing aircraft. He claims that ZeroAvia will have hydrogen-powered commercial planes taking to the sky in just three years.

An energy-dense fuel

While the spotlight has been on electric aviation for the past decade, the limitations of current battery technology restrict its expansion. Currently, lithium-ion batteries are around 48 times less energy dense than kerosene, says Sethi. This means scaling up is a problem for electric aviation. The largest electric plane flown to date is the nine-seater eCaravan. It has a range of only 100 miles -- for which it requires a battery weighing 2,000 pounds. Sethi highlights that in larger planes, like a Boeing 747, the battery would far exceed the plane's maximum take-off weight. "It's just not possible unless battery technology improves significantly, which is why hydrogen is a more viable option to fuel aircraft in the future," he says.

Having previously worked with electric car batteries, Miftakhov is well versed in their pros and cons, and that's why he opted for hydrogen. Compared to even the "wildest predictions for battery technology," hydrogen -- which is three times more energy dense than regular jet fuel -- has greater potential, he says. ZeroAvia predicts that by 2023, it will have developed engines that can power 10- to 20-seat aircraft flying up to 500 miles -- the distance between London and Zurich, or Paris and Barcelona.
By 2026, Miftakhov says, ZeroAvia aircraft will be flying up to 80 passengers the same distance, enabling airlines to keep short-haul routes while limiting environmental damage. The company hopes to expand to medium-haul flights by 2030 -- flying over 100 passengers up to 1,000 miles, the distance between London and Rome.

New fuel, new infrastructure

ZeroAvia's ability to retrofit existing aircraft means it can get its hydrogen-electric technology in the air in a short time frame, says Miftakhov. Additionally, pilots won't have to retrain, as the controls and operations will be the same. But switching to a new fuel will require new infrastructure. At its base at Cranfield Airport, in collaboration with the European Marine Energy Centre (EMEC), ZeroAvia has created a model for a self-sufficient hydrogen airport. This includes an on-site, electrolysis-based hydrogen generator, hydrogen storage facilities and refueling trucks. The hydrogen used to fuel the test flight was made using 50% renewable energy, but ZeroAvia is working towards making its hydrogen production entirely renewable by the end of the year. Miftakhov says he is starting with airlines and airports that are keen to install on-site hydrogen production.

Article credit: Rebecca Cairns, CNN. Updated 21st October 2020.
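The energy-density argument above lends itself to a quick back-of-the-envelope check. Below is a minimal Python sketch using only the figures quoted in the article (a 2,000-pound battery for the 100-mile eCaravan, and batteries roughly 48 times less energy dense than kerosene); the linear scaling and the 500-mile route are our own illustrative assumptions, not an engineering model.

```python
# Back-of-the-envelope check of the battery-vs-kerosene argument.
# The input figures come from the article; the linear scaling is a
# crude illustrative assumption, not an engineering model.

ECARAVAN_BATTERY_LB = 2_000   # battery weight for the 9-seat eCaravan
ECARAVAN_RANGE_MI = 100       # its quoted range
ENERGY_DENSITY_RATIO = 48     # kerosene vs lithium-ion (per the article)

# Naive linear scaling: battery pounds needed per mile of range.
lb_per_mile = ECARAVAN_BATTERY_LB / ECARAVAN_RANGE_MI

# Hypothetical 500-mile route (London-Zurich, per ZeroAvia's 2023 target).
route_mi = 500
print(f"Battery for a 500-mile eCaravan-class flight: {lb_per_mile * route_mi:,.0f} lb")

# The same energy carried as kerosene would weigh roughly 48x less.
print(f"Equivalent kerosene weight: {lb_per_mile * route_mi / ENERGY_DENSITY_RATIO:,.0f} lb")
```

Even this toy calculation (10,000 lb of battery versus roughly 210 lb of kerosene) shows why Sethi argues batteries don't scale: the battery alone would rival a small aircraft's entire take-off weight.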
Hydrogen-powered planes
StarckGate
Jul 23, 2020
In StarckGate News
Scientists at Tomsk Polytechnic University, jointly with teams from the University of Chemistry and Technology, Prague, and Jan Evangelista Purkyne University in Ústí nad Labem, have developed a new 2-D material to produce hydrogen, the basis of alternative energy. The material efficiently generates hydrogen molecules from fresh, salt, and polluted water by exposure to sunlight. The results are published in ACS Applied Materials & Interfaces.

"Hydrogen is an alternative source of energy. Thus, the development of hydrogen technologies can become a solution to the global energy challenge. However, there are a number of issues to solve. In particular, scientists are still searching for efficient and green methods to produce hydrogen. One of the main methods is to decompose water by exposure to sunlight. There is a lot of water on our planet, but only a few methods are suitable for salt or polluted water. In addition, few use the infrared spectrum, which is 43% of all sunlight," notes Olga Guselnikova, one of the authors and a researcher at the TPU Research School of Chemistry & Applied Biomedical Sciences.

The developed material is a three-layer structure with a 1-micrometer thickness. The lower layer is a thin film of gold, the second is a 10-nanometer layer of platinum, and the third is a film of metal-organic frameworks of chromium compounds and organic molecules.

"During the experiments, we wetted the material and sealed the container to take periodic gas samples to determine the amount of hydrogen. Infrared light caused the excitation of plasmon resonance on the sample surface. Hot electrons generated on the gold film were transferred to the platinum layer. These electrons initiated the reduction of protons at the interface with the organic layer. If electrons reached the catalytic centers of the metal-organic frameworks, the latter were also used to reduce protons and obtain hydrogen," Guselnikova explains.

Experiments have demonstrated that 100 square centimeters of the material can generate 0.5 liters of hydrogen in an hour -- one of the highest rates recorded for 2-D materials.

"In this case, the metal-organic framework also acted as a filter. It filtered out impurities and passed already purified water, without impurities, to the metal layer. This is very important because, although there is a lot of water on Earth, most of it is either salt or polluted water. Thereby, we should be ready to work with this kind of water," she notes.

In the future, the scientists hope to improve the material to make it efficient for both the infrared and visible spectra. "The material already demonstrates a certain absorption in the visible light spectrum, but its efficiency is slightly lower than in the infrared spectrum. After improvement, it will be possible to say that the material works with 93% of the spectral volume of sunlight," Guselnikova adds.
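To put the quoted rate in perspective, here is a minimal Python sketch scaling it up. Only the 0.5 L/h per 100 cm² figure comes from the article; the assumptions that the rate scales linearly with area and that six useful sun-hours are available per day are ours, purely for illustration.

```python
# Scale-up of the quoted hydrogen generation rate: 0.5 L/h per 100 cm^2.
# The per-square-metre and per-day projections are illustrative
# extrapolations assuming linear scaling with area and continuous
# illumination over the assumed sun-hours.

RATE_L_PER_H_PER_100CM2 = 0.5
CM2_PER_M2 = 10_000

rate_per_m2 = RATE_L_PER_H_PER_100CM2 * (CM2_PER_M2 / 100)  # 50 L/h per m^2
print(f"Per square metre: {rate_per_m2:.0f} L of H2 per hour")

sun_hours = 6  # assumed useful solar hours per day (hypothetical)
print(f"Per m^2 per day ({sun_hours} sun-hours): {rate_per_m2 * sun_hours:.0f} L")
```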
New material can generate hydrogen from salt and polluted water
StarckGate
Jul 22, 2020
In StarckGate News
Two major breakthroughs in solar cell technology could vastly improve the way energy is harvested from the sun. The two studies, published in Nature Energy and Nature Photonics, will transform the efficiency and significantly reduce the cost of producing solar cells, scientists say.

The first breakthrough involves "upconverting" low-energy, non-visible light into high-energy light in order to generate more electricity from the same amount of sunlight. Researchers at RMIT University and UNSW Sydney in Australia and the University of Kentucky in the US discovered that oxygen could be used to transfer low-energy light into molecules that can be converted into electricity.

"The energy from the sun is not just visible light. The spectrum is broad, including infrared light which gives us heat and ultraviolet light which can burn our skin," said Professor Tim Schmidt from UNSW Sydney. "Most solar cells... are made from silicon, which cannot respond to light less energetic than the near infrared. This means that some parts of the light spectrum are going unused by many of our current devices and technologies." The technique involves using tiny semiconductors known as quantum dots to absorb the low-energy light and turn it into visible light to capture the energy.

The second breakthrough makes use of a type of material called perovskite to create next-generation solar modules that are more efficient and stable than current commercial solar cells made of silicon. However, it is difficult to scale perovskite up to create solar panels several metres in length. "Scaling up is very demanding," said Dr Luis Ono, a co-author of the study. "Any defects in the material become more pronounced so you need high-quality materials and better fabrication techniques."

A new approach makes use of multiple layers to prevent energy being lost or toxic chemicals from leaking as the material degrades. A module measuring 22.4 cm achieved an efficiency of 16.6 per cent -- a very high efficiency for a module of that size -- and maintained a high performance level even after 2,000 hours of constant use. The researchers now plan to test their techniques on larger solar modules, with the hope of commercialising the technology in the future.
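A rough sense of what a module like that delivers can be computed directly. In the sketch below, only the 16.6 per cent efficiency comes from the article; treating the 22.4 cm figure as the side of a square module and using the standard test irradiance of 1,000 W/m² are our own assumptions.

```python
# Rough output estimate for the perovskite module described above.
# Assumptions (ours, for illustration): 22.4 cm is the side of a
# square module, and irradiance is the standard test value of
# 1,000 W/m^2. Only the 16.6% efficiency comes from the article.

SIDE_M = 0.224
IRRADIANCE_W_M2 = 1_000.0
EFFICIENCY = 0.166

area_m2 = SIDE_M ** 2
power_w = area_m2 * IRRADIANCE_W_M2 * EFFICIENCY
print(f"Module area: {area_m2 * 1e4:.0f} cm^2, output: {power_w:.1f} W")  # ~8.3 W
```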
SOLAR ENERGY BREAKTHROUGH CREATES ELECTRICITY FROM INVISIBLE LIGHT
StarckGate
Jul 20, 2020
In StarckGate News
Tired of standing in line? Wait a bit longer, and you may never have to again. Everyone from Amazon to Silicon Valley startups is trying to eliminate lines in retail stores. Amazon has opened 24 of its Amazon Go stores, which use cameras and artificial intelligence to see what you've taken off shelves and charge you as you walk out. Some startups, such as San Francisco-based Grabango, are closely mimicking Amazon's approach of using AI-powered cameras mounted in ceilings to identify what you've removed from a shelf and then charge you for those items.

But others are trying an entirely different route to skipping the checkout: smart shopping carts. These companies have added cameras and sensors to the carts, and are using AI to tell what you've placed in them. A built-in scale weighs items, in case you have to pay by the pound for an item. Customers pay by entering a credit card, or by using Apple Pay or Google Pay. When a customer exits the store, a green light on the shopping cart indicates that their order is complete, and they're charged. If something goes wrong, the light turns red, and a store employee is summoned.

(Image: one of Caper's smart shopping carts)

The startups behind the smart carts, including Caper and Veeve, say it's much easier to add technology to the shopping cart than to an entire store.
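The exit-time flow described above is easy to picture as a small state machine. The following Python sketch is entirely hypothetical (the class, method names, and tolerance are ours, not Caper's or Veeve's); it only illustrates the tally-then-green-or-red logic the article describes.

```python
# Hypothetical sketch of a smart cart's exit-time checkout logic, as
# described in the article: tally recognized items, cross-check the
# built-in scale, then show green (charge) or red (summon staff).
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: float        # fixed price, or computed price for weighed goods
    expected_lb: float  # weight the cart expects this item to add

class SmartCart:
    TOLERANCE_LB = 0.2  # allowed mismatch between vision and scale

    def __init__(self):
        self.items: list[Item] = []

    def vision_detected(self, item: Item) -> None:
        """Called when the cart's cameras recognize an added item."""
        self.items.append(item)

    def checkout(self, scale_reading_lb: float) -> str:
        expected = sum(i.expected_lb for i in self.items)
        if abs(scale_reading_lb - expected) > self.TOLERANCE_LB:
            return "RED: weight mismatch, summoning store employee"
        total = sum(i.price for i in self.items)
        return f"GREEN: charged ${total:.2f} to stored payment method"

cart = SmartCart()
cart.vision_detected(Item("coffee", 7.99, 1.0))
cart.vision_detected(Item("apples", 3.50, 2.1))
print(cart.checkout(scale_reading_lb=3.1))
```

Real carts presumably fuse the camera recognition with the scale reading rather than trusting either signal alone; the point here is just the charge-on-exit decision.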
Smart shopping carts
StarckGate
Jul 20, 2020
In StarckGate News
(Image: the hybrid solar energy converter features a solar module with glowing red cells, built at Tulane. Credit: Matthew Escarra)

Tulane University researchers are part of a team of scientists who have developed a hybrid solar energy converter that generates electricity and steam with high efficiency and low cost.

The work, led by Matthew Escarra, associate professor of physics and engineering physics at Tulane, and Daniel Codd, associate professor of mechanical engineering at the University of San Diego, is the culmination of a U.S. Department of Energy ARPA-E project that began in 2014 with $3.3 million in funding and involved years of prototype development at Tulane and field testing in San Diego. The research is detailed this month in the journal Cell Reports Physical Science. Researchers from San Diego State University, Boeing-Spectrolab and Otherlab were also part of the project.

"Thermal energy consumption is a huge piece of the global energy economy -- much larger than electricity use. There has been a rising interest in solar combined heat and power systems to deliver both electricity and process heat for zero-net-energy and greenhouse-gas-free development," said Escarra.

The hybrid converter utilizes an approach that more fully captures the whole spectrum of sunlight. It generates electricity from high-efficiency multi-junction solar cells that also redirect infrared rays of sunlight to a thermal receiver, which converts those rays to thermal energy. The thermal energy can be stored until needed and used to provide heat for a wide range of commercial and industrial uses, such as food processing, chemical production, water treatment, or enhanced oil recovery.

The team reports that the system demonstrated 85.1 percent efficiency, delivered steam at up to 248°C, and is projected to have a system levelized cost of 3 cents per kilowatt-hour. With follow-on funding from the Louisiana Board of Regents and Reactwell, a local commercialization partner, the team is continuing to refine the technology and move towards pilot-scale validation.

"We are pleased to have demonstrated high performance field operation of our solar converter," Escarra said, "and look forward to its ongoing commercial development."
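The "capture the whole spectrum" idea is simple energy bookkeeping: whatever the multi-junction cells don't convert to electricity is redirected as infrared to the thermal receiver. The sketch below is our own illustrative accounting, not the team's published model; only the 85.1 percent combined figure comes from the article, and the electrical/thermal split is an invented example.

```python
# Illustrative energy bookkeeping for a hybrid PV/thermal converter.
# Only the 85.1% combined collection efficiency is from the article;
# the 30/70 electrical-vs-thermal split below is a made-up example.

incident_w = 1_000.0          # assumed incident solar power, W
combined_efficiency = 0.851   # reported by the team

assumed_electrical_fraction = 0.30  # hypothetical split of captured energy
captured = incident_w * combined_efficiency
electricity = captured * assumed_electrical_fraction
heat = captured - electricity

print(f"Captured: {captured:.0f} W of {incident_w:.0f} W incident")
print(f"Electricity: {electricity:.0f} W, process heat: {heat:.0f} W")
```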
Researchers Invent High-Performance Hybrid Solar Energy Converter
StarckGate
Jul 07, 2020
In StarckGate News
The first 10 fuel cell electric trucks set to revolutionise the green hydrogen mobility ecosystem in Switzerland have been shipped from South Korea. Hyundai Motor Company on Sunday said the first 10 Hyundai XCIENT Fuel Cells, the world's first mass-produced fuel cell heavy-duty truck, were on their way to Switzerland. South Korea's largest automaker plans to ship a total of 50 XCIENT Fuel Cells to Switzerland this year, with handover to commercial fleet customers starting in September. Hyundai plans to roll out a total of 1,600 XCIENT Fuel Cell trucks by 2025, reflecting the company's environmental commitment and technological prowess as it works toward reducing carbon emissions through zero-emission solutions.

"XCIENT Fuel Cell is a present-day reality, not a mere future drawing-board project. By putting this groundbreaking vehicle on the road now, Hyundai marks a significant milestone in the history of commercial vehicles and the development of a hydrogen society," said In Cheol Lee, Executive Vice-President and Head of Commercial Vehicle Division at Hyundai. "Building a comprehensive hydrogen ecosystem, where critical transportation needs are met by vehicles like XCIENT Fuel Cell, will lead to a paradigm shift that removes automobile emissions from the environmental equation."

"Having introduced the world's first mass-produced fuel-cell electric passenger vehicle, the ix35, and the second-generation fuel cell electric vehicle, the NEXO, Hyundai is now leveraging decades of experience, world-leading fuel-cell technology, and mass-production capability to advance hydrogen in the commercial vehicle sector with the XCIENT Fuel Cell," he added.

XCIENT Fuel Cell

XCIENT is powered by a 190 kW hydrogen fuel cell system with dual 95 kW fuel cell stacks. Seven large hydrogen tanks offer a combined storage capacity of around 32.09 kg of hydrogen. The driving range per refuelling for XCIENT Fuel Cell is about 400 km, which was developed with an optimal balance between the specific requirements of potential commercial fleet customers and the charging infrastructure in Switzerland. Refuelling each truck takes approximately 8 to 20 minutes. Fuel cell technology is particularly well suited to commercial shipping and logistics due to long ranges and short refuelling times. The dual-mounted fuel cell system provides enough energy to drive the heavy-duty trucks up and down the mountainous terrain in the region.

Green hydrogen ecosystem

In 2019, Hyundai formed Hyundai Hydrogen Mobility (HHM), a joint venture with Swiss company H2 Energy, which will lease the trucks to commercial truck operators on a pay-per-use basis, meaning there is no initial investment for the commercial fleet customers. Hyundai said it chose Switzerland as the starting point for its business venture for various reasons, one being the Swiss LSVA road tax on commercial vehicles, which does not apply to zero-emission trucks. That nearly equalises the hauling costs per kilometre of the fuel cell truck compared to a regular diesel truck.

Hyundai's business case involves using purely clean hydrogen generated from hydropower. To truly reduce carbon emissions, all of the trucks need to run on green hydrogen only. Switzerland has one of the highest shares of hydropower globally, and can therefore deliver sufficient green energy for the production of hydrogen. Once the project is underway in Switzerland, Hyundai plans to expand it to other European countries as well.
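From the two figures Hyundai quotes, roughly 32 kg of hydrogen storage and about 400 km of range, one can estimate the truck's fuel consumption. A minimal sketch (the full-tank-to-empty assumption is ours):

```python
# Fuel-consumption estimate from the quoted XCIENT figures.
# Assumes the full 32.09 kg tank capacity corresponds to the full
# 400 km range, which is a simplification for illustration.

TANK_KG = 32.09
RANGE_KM = 400

kg_per_100km = TANK_KG / RANGE_KM * 100
print(f"Approximate consumption: {kg_per_100km:.1f} kg H2 per 100 km")  # ~8.0 kg
```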
Hyundai ships first hydrogen trucks to Switzerland
StarckGate
Jun 17, 2020
In StarckGate News
The next major revolution in computer chip technology is now a step closer to reality. Researchers have shown that carbon nanotube transistors can be made rapidly in commercial facilities, with the same equipment used to manufacture traditional silicon-based transistors -- the backbone of today's computing industry.

Carbon nanotube field-effect transistors (CNFETs) are more energy-efficient than silicon field-effect transistors and could be used to build a new generation of three-dimensional microprocessors. But until now, these devices have been mostly restricted to academic laboratories, with only small numbers produced.

However, in a new study this month -- published in the journal Nature Electronics -- scientists have demonstrated how CNFETs can be fabricated in large quantities on 200-millimetre wafers: the industry standard for computer chip design. The CNFETs were created in a commercial silicon manufacturing facility and a semiconductor foundry in the United States. Having analysed the deposition technique used to make the CNFETs, a team at the Massachusetts Institute of Technology (MIT) developed a way of speeding up the fabrication process by more than 1,100 times compared to previous methods, while also reducing the cost. Their technique deposited the carbon nanotubes edge to edge on wafers, with CNFET arrays of 14,400 by 14,400 distributed across multiple wafers.

Max Shulaker, an MIT assistant professor of electrical engineering and computer science, who has been designing CNFETs since his PhD days, says the new study represents "a giant step forward, to make that leap into production-level facilities." Bridging the gap between lab and industry is something that researchers "don't often get a chance to do," he added. "But it's an important litmus test for emerging technologies."

For decades, improvements in silicon-based transistor manufacturing have brought down prices and increased energy efficiency in computing. Concerns are mounting that this trend may be nearing its end, however, as increasing numbers of transistors packed into integrated circuits do not appear to be increasing energy efficiency at historic rates. CNFETs are an attractive alternative technology because they are "around an order of magnitude more energy efficient" than silicon-based transistors, says Shulaker.

While silicon-based transistors are typically made at temperatures of 450 to 500 degrees Celsius, CNFETs can be manufactured at near-room temperatures. "This means that you can actually build layers of circuits right on top of previously fabricated layers of circuits, to create a 3D chip," Shulaker explains. "You can't do this with silicon-based technology, because it would melt the layers underneath." A 3D computer chip, which might combine logic and memory functions, is projected to "beat the performance of a state-of-the-art 2D chip made from silicon by orders of magnitude," he says.

One of the most effective ways to build CNFETs in the lab is a method for depositing nanotubes called incubation, where a wafer is submerged in a bath of nanotubes until the nanotubes stick to the wafer's surface. The performance of the CNFET depends in large part on the deposition process, explains co-author Mindy Bishop, a PhD student in the Harvard-MIT Health Sciences and Technology program. This affects both the number of carbon nanotubes on the surface of the wafer and their orientation.
They are "either stuck onto the wafer in random orientations like cooked spaghetti, or all aligned in the same direction like uncooked spaghetti still in the package." Aligning the nanotubes perfectly in a CNFET leads to ideal performance, but alignment is difficult to obtain, says Bishop: "It's really hard to lay down billions of tiny 1-nanometre diameter nanotubes in a perfect orientation across a large 200-millimetre wafer. To put these length scales into context, it's like trying to cover the entire state of New Hampshire in perfectly oriented, dry spaghetti." While the incubation method employed by the MIT team is unable to perfectly align every nanotube (perhaps a breakthrough in future years may achieve this?), their experiments showed that it delivers sufficiently high performance for a CNFET to outperform a traditional silicon-based transistor. Furthermore, careful observations revealed how to alter the process to make it more viable for large-scale commercial production. For instance, Bishop's team found that "dry cycling", a method of intermittently drying out the submerged wafer, could drastically reduce the incubation time – from 48 hours to 150 seconds. Another new method called artificial concentration through evaporation (ACE) deposited small amounts of nanotube solution on a wafer, instead of submerging the wafer in a tank. The slow evaporation of the solution increased the overall density of nanotubes on the wafer. The researchers worked with Analog Devices, a commercial silicon manufacturing facility, and SkyWater Technology, a semiconductor foundry, to fabricate CNFETs using the improved methods. They were able to use the same equipment that the two facilities use to make silicon-based wafers, while also ensuring that the nanotube solutions met strict chemical and contaminant requirements of the facilities. The next steps, already underway, will be to build different types of integrated circuits out of CNFETs in an industrial setting and explore some of the new functions that a 3D chip could offer, adds Shulaker. "The next goal is for this to transition from being academically interesting to something that will be used by folks, and I think this is a very important step in this direction," he concludes.
Carbon nanotube transistors make the leap from lab to factory floor
StarckGate
May 12, 2020
In StarckGate News
Facebook AI has built and open-sourced BlenderBot, the largest-ever open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. The culmination of years of research in conversational AI, this is the first chatbot to blend a diverse set of conversational skills — including empathy, knowledge, and personality — together in one system. We achieved this milestone through a new chatbot recipe that includes improved decoding techniques, novel blending of skills, and a model with 9.4 billion parameters, which is 3.6x more than the largest existing system. Today we're releasing the complete model, code, and evaluation set-up, so that other AI researchers will be able to reproduce this work and continue to advance conversational AI research.

Conversation is an art that we practice every day — when we're debating food options, deciding the best movie to watch after dinner, or just discussing current events to broaden our worldview. For decades, AI researchers have been working on building an AI system that can converse as well as humans can: asking and answering a wide range of questions, displaying knowledge, and being empathetic, personable, engaging, serious, or fun, as circumstances dictate. So far, systems have excelled primarily at specialized, preprogrammed tasks, like booking a flight. But truly intelligent, human-level AI systems must effortlessly understand the broader context of the conversation and how specific topics relate to each other.

As the culmination of years of our research, we're announcing that we've built and open-sourced BlenderBot, the largest-ever open-domain chatbot. It outperforms others in terms of engagement and also feels more human, according to human evaluators. This is the first time a chatbot has learned to blend several conversational skills — including the ability to assume a persona, discuss nearly any topic, and show empathy — in natural, 14-turn conversation flows. Today we're sharing new details of the key ingredients that we used to create our new chatbot.

Some of the best current systems have made progress by training high-capacity neural models with millions or billions of parameters using huge text corpora sourced from the web. Our new recipe incorporates not just large-scale neural models, with up to 9.4 billion parameters — or 3.6x more than the largest existing system — but also equally important techniques for blending skills and detailed generation.

Chatbot recipe: Scale, blending skills, and generation strategies

Scale

Common to other natural language processing research today, the first step in creating our chatbot was large-scale training. We pretrained large (up to 9.4 billion parameter) Transformer neural networks on large amounts of conversational data. We used previously available public domain conversations that involved 1.5 billion training examples of extracted conversations. Our neural networks are too large to fit on a single device, so we utilized techniques such as column-wise model parallelism, which allows us to split the neural network into smaller, more manageable pieces while maintaining maximum efficiency. Such careful organization of our neural networks enabled us to handle larger networks than we could previously while maintaining the high efficiency needed to scale to terabyte-size data sets.

Blending skills

While learning at scale is important, it's not the only ingredient necessary for creating the best possible conversationalist.
Learning to mimic the average conversation in large-scale public training sets doesn't necessarily mean that the agent will learn the traits of the best conversationalists. In fact, if not done carefully, it can make the model imitate poor or even toxic behavior. We recently introduced a novel task called Blended Skill Talk (BST) for training and evaluating these desirable skills. BST consists of the following skills, leveraging our previous research:

- Engaging use of personality (PersonaChat)
- Engaging use of knowledge (Wizard of Wikipedia)
- Display of empathy (Empathetic Dialogues)
- Ability to blend all three seamlessly (BST)

Blending these skills is a difficult challenge because systems must be able to switch between different tasks when appropriate, like adjusting tone if a person changes from joking to serious. Our new BST data set provides a way to build systems that blend and exhibit these behaviors. We found that fine-tuning the model with BST has a dramatic effect on human evaluations of the bot's conversational ability.

Generation strategies

Training neural models is typically done by minimizing perplexity, which measures how well models can predict and generate the next word. However, to make sure conversational agents don't repeat themselves or display other shortcomings, researchers typically use a number of possible generation strategies after the model is trained, including beam search, next-token sampling, and n-gram blocking. We find that the length of the agent's utterances is important in achieving better results with human evaluators. If they're too short, the responses are dull and communicate a lack of interest; if they're too long, the chatbot seems to waffle and not listen. Contrary to recent research, which finds that sampling outperforms beam search, we show that a careful choice of search hyperparameters can give strong results by controlling this trade-off. In particular, tuning the minimum beam length gives important control over the "dull versus spicy" spectrum of responses.

Putting our recipe to the test

To evaluate our model, we benchmarked its performance against Google's latest Meena chatbot through pairwise human evaluations. Since their model has not been released, we used the roughly 100 publicly released and randomized logs for this evaluation. Using the ACUTE-Eval method, human evaluators were shown a series of dialogues between humans paired with each respective chatbot. They were asked:

- "Who would you prefer to talk to for a long conversation?" (showing engagingness)
- "Which speaker sounds more human?" (showing humanness)

When presented with chats showing Meena in action and chats showing BlenderBot in action, 67 percent of the evaluators said that our model sounds more human, and 75 percent said that they would rather have a long conversation with BlenderBot than with Meena. Further analysis via human evaluation underscored the importance of both blending skills and choosing a generation strategy that produces nonrepetitive, detailed responses. In an A/B comparison between human-to-human and human-to-BlenderBot conversations to measure engagement, models fine-tuned with BST tasks were preferred 49 percent of the time to humans, while models trained only on public domain conversations were preferred just 36 percent of the time. Decoding strategies, such as beam blocking and controlling for the minimum beam length, also had a large impact on results.
After we removed the minimum beam length constraint, the model's responses were roughly half the length, and the performance of our BST models went down from 49 percent to 21 percent. These results show that while scaling models is important, there are other, equally important parts of the chatbot recipe.

Since 2018, we've improved model performance in this evaluation — how often human evaluators preferred our chatbots to human-to-human chats — from 23% in 2018 to 49% today. Over the past few years, we've doubled the performance of our chatbot models through various key model improvements, like Specificity Control, Poly-Encoders, and the recipe described in this blog post. Our latest model's performance is nearly equal to human-level quality in this specific test setup. This would suggest that we have achieved near human-level performance for this type of evaluation; however, our chatbot still has many weaknesses relative to humans, and finding an evaluation method that better exposes these weaknesses is an open problem and part of our future research agenda.

Looking ahead

We're excited about the progress we've made in improving open-domain chatbots. However, we are still far from achieving human-level intelligence in dialogue systems. Though it's rare, our best models still make mistakes, like contradiction or repetition, and can "hallucinate" knowledge, as is seen in other generative systems. Human evaluations are also generally conducted using relatively brief conversations, and we'd most likely find that sufficiently long conversations would make these issues more apparent. We're currently exploring ways to further improve the conversational quality of our models in longer conversations with new architectures and different loss functions. We're also focused on building stronger classifiers to filter out harmful language in dialogues. And we've seen preliminary success in studies to help mitigate gender bias in chatbots.

True progress in the field depends on reproducibility — the opportunity to build upon the best technology possible. We believe that releasing models is essential to enable full, reliable insights into their capabilities. That's why we've made our state-of-the-art open-domain chatbot publicly available through our dialogue research platform ParlAI. By open-sourcing code for fine-tuning and conducting automatic and human evaluations, we hope that the AI research community can build on this work and collectively push conversational AI forward.

Written by Stephen Roller (Research Engineer), Jason Weston (Research Scientist), and Emily Dinan (Research Engineer)
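The decoding trade-off described above (a minimum response length plus n-gram blocking during beam search) is easy to experiment with. Below is a minimal sketch assuming the Hugging Face transformers port of a distilled BlenderBot checkpoint; the post itself releases the model through ParlAI, so the checkpoint name and all hyperparameter values here are our assumptions for illustration, not the paper's exact settings.

```python
# Minimal sketch of the decoding controls discussed above: beam search
# with a minimum generation length and n-gram blocking. Assumes the
# Hugging Face `transformers` port of a distilled BlenderBot checkpoint;
# the original release used ParlAI, and these hyperparameters are
# illustrative rather than the published configuration.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"  # assumed checkpoint name
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("My favorite season is autumn. What about you?",
                   return_tensors="pt")
reply_ids = model.generate(
    **inputs,
    num_beams=10,            # beam search rather than sampling
    min_length=20,           # the "minimum beam length" knob: longer, richer replies
    no_repeat_ngram_size=3,  # n-gram blocking against self-repetition
    max_length=60,
)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
```

Raising `min_length` pushes replies toward the "spicy" end of the spectrum the post describes; lowering it tends to produce the short, dull responses that evaluators penalized.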
A state-of-the-art open source chatbot
StarckGate
May 11, 2020
In StarckGate News
Maxime Firth's business is complicated to manage, even in good times. His company, Onduline, turns recycled fibres into roofing material, after dousing them with bitumen to make them waterproof, and sells products in 100 countries. Its eight production plants span from Nizhny Novgorod in Russia and Penang in Malaysia to Juiz de Fora in Brazil.

Further complicating his supply chain, Mr Firth's business is strongly seasonal. People install roofs in the summer, so products are made from January to March, to sell from April to September. The big question for him is how much demand there will be from important markets like China and the US. "Instead of manufacturing something that you are forced to sell, it is better to know what the market wants to buy," he says. The impact of coronavirus makes it difficult for businessmen like him to make the right decisions.

To manage demand, Mr Firth's company used to work with "homemade" IT tools, mainly based on Excel spreadsheets. But now he is using software accessible over the internet (also known as cloud-based) which can model his situation every week. It allows the firm to use the latest data to explore how demand might start returning in different markets. "In terms of profitability, and also production, it's changing every week," he says.

Coronavirus "puts supply chain planning under the spotlight," says Frank Calderoni, chief executive of Anaplan, whose software Onduline has been using. Some companies have seen sales dry up: like Mr Firth's roofing material purchases, which he says are down 70%. But demand for some goods has rocketed, including groceries, books, coffee, and toys for children. Supply chain chaos could last at least another 18 months, and probably longer, says Len Pannett, president of the UK roundtable of the Council of Supply Chain Management Professionals.

(Image: Getty Images. Caption: Supply chain disruption could last another 18 months, experts say)

Businesses trying to get back to work may find their overseas suppliers are still in lockdown. The more information available about every firm in the chain, the better. "Being in touch with a customer's customer's customer, you can see ahead of time what's coming your way" and start finding alternative suppliers if you need to, Mr Pannett says. Most businesses had been monitoring supply chains, finance, and sales with different tools. Joining these silos into one cloud platform lets finance teams peek into supply chains and sales, and be more efficient with money, he says. And with margins tighter than ever, businesses will need to make better decisions. More accurate real-time information will help them do this, and keep better track of their decisions' effects, according to Mr Calderoni. Supply chains were already trickling onto the cloud, and he says coronavirus will accelerate that move, with technologies like blockchain and artificial intelligence (AI) becoming commonplace.

(Image: APM Terminals. Caption: Susan Hunter looks after the day-to-day operations of Khalifa Bin Salman Port)

For the Gulf state of Bahrain - an island - all its ventilators, facemasks, medicines, and 99.5% of the goods in its market come through its only port. The outbreak forced the port to change its procedures, says Susan Hunter, who as head of APM Terminals Bahrain is in charge of Khalifa Bin Salman Port's day-to-day running. The port had to quickly arrange for lorry drivers to apply for gate passes, do security checks, and make payments online.
It has also set up a critical cargo programme to identify containers carrying medical supplies, allowing these to pass swiftly through customs and be put where they can be accessed quickly. Ms Hunter would like to move all the administration to a blockchain system. "There's no resistance, just 'How are we going to make that happen?'" she says. "We're just a couple of steps away from being able to put a lot of our documents onto a blockchain platform; we are seeing the industry changing that way," she says.

Blockchain keeps a record of transactions in a ledger, stored across a number of computers linked in a peer-to-peer network. This lets firms share information about a container just once, but everyone up and down the chain can see that information. It allows "the right person to have the right information at the right time, in a permissioned way," says Richard Stockley, blockchain executive at IBM Europe.

(Image: APM Terminals. Caption: Blockchain could make tracking goods simpler)

Blockchain has made headway in areas like tracing food through a supply chain. Walmart asked IBM to create a food tracing system based on blockchain technology. As an experiment, Walmart's chief executive pulled out a packet of mangoes, imagined they were toxic, and asked how long it would take to find out where they came from and where the other mangoes in that shipment were. Manually, it took six-and-a-half days to find the answer. But using blockchain, "we've got that down to about two seconds," says Mr Stockley. The biggest challenge in introducing blockchain to supply chains is getting different organisations to collaborate. "Blockchain is a team sport," he quips. But Mr Stockley says blockchain can make supply chains "a lot more resilient, more transparent, and proactive," and will get much more attention as we emerge from coronavirus.

Amazon has changed forever how quickly we expect products to arrive, and how visible their movements should be to us on the way, says Adam Compain, chief executive of San Francisco-based ClearMetal, an AI supply chains startup. But outside Jeff Bezos's company, big corporate supply chains are still pretty static. Typically, every six months, a company will look at how long it takes products to go from China to a warehouse and on to a store shelf, he says.

(Image: APM Terminals. Caption: Companies would like to know exactly where their products are)

Getting more up-to-date information means making sense of thousands of pieces of information each day about where products are. Much of that information can be poor or conflicting. For example, a delivery company might tell you twice that the same shipment has been delivered, or is out for delivery. But machine learning algorithms can spot patterns in this messy data. Maybe the same delivery company always sends two messages, but the first is generally more accurate. AI is now much better than humans at spotting whether there's a storm brewing now that will delay your shipping container next week, Mr Pannett says.

For thousands of businesses like Mr Firth's in France, the coming year will be tough. "Now we know until May we are okay," he says. After that, "we don't know if the customers will pay us." So each week his company, like many others, will make high-stakes decisions using a combination of luck and the best tools technology can offer.

By Padraig, Business reporter
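The ledger idea Mr Stockley describes, where each party appends records that everyone else can verify, can be illustrated with a toy hash chain. This is our own minimal sketch of the general principle, not IBM's platform; real permissioned blockchains add consensus, access control, and replication across many nodes.

```python
# Toy hash-chained ledger illustrating the shared-record idea described
# above: each entry commits to the one before it, so any tampering
# breaks every later hash. A teaching sketch, not IBM's system.
import hashlib
import json

def entry_hash(payload: dict) -> str:
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

ledger = []

def append(record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    entry = {"record": record, "prev_hash": prev}
    entry["hash"] = entry_hash({"record": record, "prev_hash": prev})
    ledger.append(entry)

append({"container": "MSKU123", "event": "gate pass issued"})
append({"container": "MSKU123", "event": "customs cleared"})

# Any party can verify the whole chain by recomputing hashes and links:
ok = all(
    e["hash"] == entry_hash({"record": e["record"], "prev_hash": e["prev_hash"]})
    and (i == 0 or e["prev_hash"] == ledger[i - 1]["hash"])
    for i, e in enumerate(ledger)
)
print("ledger intact:", ok)
```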
StarckGate
Mar 08, 2020
In StarckGate News
The experimental train will be created by converting ScotRail carriages retired in December. It is likely to be tested on a heritage line in Scotland, like the Bo'ness and Kinneil Railway, which has been used for trials of battery trains. Testing would then switch to a main line. Conversion work will be done at train refurbisher Brodie Engineering in Kilmarnock. The technology involved will be developed by London-based Arcola Energy, with the University of St Andrews also involved in the Scottish Enterprise project.

Arcola chief executive Dr Ben Todd said hydrogen could be used on non-electrified routes which are too long for battery-powered trains. He said: "It will be a small feasibility study, involving the refit of a former ScotRail class 314 electric train." He said cheaper hydrogen production may be required to make it a viable fuel, since the gas remains far more expensive than petrol and diesel.

Reducing costs

However, building hydrogen plants close to wind farms to exploit surplus energy generated overnight could significantly reduce the cost. Dr Todd said: "We need to make a major change to the energy system, with more power from renewable sources to the grid." Hydrogen-powered trains have been carrying passengers in Germany since 2018, while other research projects have been launched in England. The Scottish Government has pledged to "decarbonise" passenger trains by 2035, five years ahead of the UK.

Battery power

Its Transport Scotland agency is examining the merits of both battery and hydrogen power following ministers' commitment to trial hybrid self-powered trains. Hitachi has already offered to add batteries to one of its new class 385 electric trains, which run ScotRail services on several lines between Edinburgh and Glasgow. It said they would have a 60-mile range.

A Transport Scotland spokesperson said: "In line with our commitment to decarbonise the railway by 2035, hydrogen is one of the energy sources we are exploring as an alternative to diesel. We are working with Scottish Enterprise and rail industry partners to see how this can be practically applied to a retired class 314 ScotRail train."

Scottish Enterprise head of high value manufacturing David Leven said: "This rail innovation project will provide opportunities for Scotland's manufacturing companies as well as support the delivery of our national ambitions for more and better jobs and a net zero carbon economy."

By Alastair Dalton
Hydrogen-powered train to be tested in Scotland as fuel of future
StarckGate
Feb 27, 2020
In StarckGate News
Here's a list of promising drugs being tried on people infected with the virus.

The new coronavirus that has sickened more than 70,000 people in central China is spreading, setting off outbreaks in South Korea, Iran, and Italy, and now threatens to move around the globe as a true pandemic. The US Centers for Disease Control today said that its spread in the US is inevitable too, although how many people will get the disease is unknown. "We are asking the American public to prepare for the expectation that this might be bad," Nancy Messonnier, director of the National Center for Immunization and Respiratory Diseases, said today.

If the disease known as Covid-19 does become a pandemic, one thing is for sure: billions of people will be hoping for a drug or vaccine. While there's no proven treatment yet for the virus and the pneumonia it causes, there are more than 70 drugs or drug combinations potentially worth trying, according to the World Health Organization. Here are some of the most promising, fast-moving research projects.

Virus blockers

Although experimental, the injectable drug remdesivir, made by Gilead Sciences, is a broad-spectrum antiviral that Francis Collins, head of the National Institutes of Health, says he's "optimistic" about. The drug forms a misshapen version of a nucleotide that the virus needs to build new copies of itself, thus preventing it from multiplying. The same type of strategy led to Gilead's blockbuster hepatitis C drug. Remdesivir is a good prospect because it's broadly active against viruses whose genetic material is made of RNA, like the coronavirus. It works well in mice and monkeys infected with MERS (a related germ), although it didn't help very much when it was given to Ebola victims in Congo starting in 2018.

In January, remdesivir was given to a 35-year-old man in Washington state who caught the coronavirus during a trip to China and who then recovered. To find out whether it really works, the NIH said today that it would run a study of remdesivir at the University of Nebraska Medical Center in Omaha, where some Americans with the disease are being cared for or are under quarantine. According to the agency, the study will be blinded: some subjects will get the drug and others a dummy injection, or placebo. The first person to join the trial is an American who was on the Diamond Princess cruise ship, the site of a major outbreak.

Vaccines

Probably the best defense in the long term, a conventional vaccine has the drawback that such drugs usually take three or four years to reach the market, at the quickest. That's because it takes time to prove they protect people from infection and to manufacture them in large amounts. As well, it's not unusual for vaccines to simply fail, sending scientists back to the drawing board. Luckily, a bunch of prototype vaccines were developed against SARS, the coronavirus disease that killed more than 800 people starting in 2003. Though the vaccines were never needed after SARS stopped spreading, some of those approaches are being dusted off for the new virus.

One company developing a coronavirus vaccine is Sanofi. Its approach is to manufacture proteins called antigens from the virus; these can be injected into the bloodstream, training people's immune systems to recognize the germ. Usually that kind of vaccine is made in chicken eggs, but that's a big bottleneck. Millions of eggs aren't easy to come by. Sanofi has developed other ways to make antigens inside insect cells.
Faster vaccines

Some companies are experimenting with new types of vaccines that involve injecting short strands of genetic material from the virus directly into people's bodies. That way, their own cells make the viral antigens. Although such vaccines haven't had major success in medicine yet, they are among the quickest varieties to create prototypes of. That became clear this week when one company, Moderna Therapeutics, said it had already shipped some doses of an RNA vaccine to the NIH. Those could be given to volunteers in a safety test starting as soon as April. "We have never rushed to intervene on a pandemic in this time frame before," says Stephen Hoge, the company's president.

Plasma from survivors

A person who gets infected with a virus but beats it has blood swarming with antibodies to the germ. It's been shown that collecting blood plasma from survivors and infusing it into another person can sometimes be life-saving. While plasma is not certain to work, more than 27,000 people are already listed as recovered from coronavirus in China, so there could be an ample number of donors. Doctors in Shanghai are among those trying plasma infusions.

HIV drugs

To help patients in severe respiratory distress from the virus, doctors in China are prioritizing drugs they can get hold of, and these include several medications already approved for HIV. One hospital in Shanghai, for instance, tested a combination of the pills lopinavir and ritonavir in 52 patients. The company AbbVie markets this combination as Kaletra in the US. While they didn't see any effect, more studies are planned or under way using other drugs, including Truvada, a once-daily pill taken by people who don't have HIV but are at risk of getting infected through sex.

Chloroquine

According to some social-media posts, the cure for coronavirus is already known, and it's the old anti-malaria drug chloroquine. Actually, that's not proven. But studies of the cheap, well-studied, and readily available compound are under way in China, with patients getting 400 milligrams a day for five days. Initial laboratory tests suggest the drug, discovered in 1934, may be highly effective.

By Antonio Regalado, Feb 25, 2020
What are the best coronavirus treatments?
StarckGate
Feb 21, 2020
In StarckGate News
The news: The European Union's newly released white paper containing guidelines for regulating AI acknowledges the potential for artificial intelligence to "lead to breaches of fundamental rights," such as bias, suppression of dissent, and lack of privacy. It suggests legal requirements such as:

- Making sure AI is trained on representative data
- Requiring companies to keep detailed documentation of how the AI was developed
- Telling citizens when they are interacting with an AI
- Requiring human oversight for AI systems

The criticism: The new criteria are much weaker than the ones suggested in a version of the white paper leaked in January. That draft suggested a moratorium on facial recognition in public spaces for five years, while this one calls only for a "broad European debate" on facial recognition policy. Michael Veale, a digital policy lecturer at University College London, notes that the commission often takes more extreme positions in early drafts as a political tactic, so it's not surprising that the official paper does not suggest a moratorium. However, he says it's still disappointing because it comes on the heels of a similarly lackluster report from the High-Level Expert Group on Artificial Intelligence, which was considered "heavily captured by industry."

Meanwhile, the paper's guidelines for AI apply only to what it deems "high-risk" technologies, says Frederike Kaltheuner, a tech policy fellow at Mozilla. "High-risk" can include certain industries, like health care, or certain types, like biometric surveillance. But the suggestions wouldn't apply to advertising technology or consumer privacy, which Kaltheuner says can have big effects and which aren't being addressed under GDPR. The white paper is only a set of guidelines; the European Commission will start drafting legislation based on these proposals and comments at the end of 2020.

What else: The EU today also released a paper on "European data strategy" that suggests it wants to create a "single European data space" -- meaning a European data giant that will challenge the big tech companies of Silicon Valley.

By Angela Chen
New AI regulations for Europe: weakened guidelines
StarckGate
Feb 17, 2020
In StarckGate News
IN BRIEF

After five years' work, an MIT team can now fabricate a transparent version of a silica aerogel, an ultralight material that blocks heat transfer. They have used their aerogel in a solar thermal collector to generate temperatures suitable for water and space heating and more—without using the expensive concentrators, special materials, and vacuum enclosures that have kept current solar thermal systems from being widely adopted. They have also demonstrated that inserting an aerogel into the gap in a double-pane window will make a product that's both affordable and highly insulating. Finally, their work has generated guidelines that will help innovators design and fabricate aerogels with nanoscale structures tailored for high performance in other critical technologies.

In recent decades, the search for high-performance thermal insulation for buildings has prompted manufacturers to turn to aerogels. Invented in the 1930s, these remarkable materials are translucent, ultraporous, lighter than a marshmallow, strong enough to support a brick, and an unparalleled barrier to heat flow, making them ideal for keeping heat inside on a cold winter day and outside when summer temperatures soar.

Five years ago, researchers led by Evelyn Wang, a professor and head of the Department of Mechanical Engineering, and Gang Chen, the Carl Richard Soderberg Professor in Power Engineering, set out to add one more property to that list. They aimed to make a silica aerogel that was truly transparent. "We started out trying to realize an optically transparent, thermally insulating aerogel for solar thermal systems," says Wang. Incorporated into a solar thermal collector, a slab of aerogel would allow sunshine to come in unimpeded but prevent heat from coming back out—a key problem in today's systems. And if the transparent aerogel were sufficiently clear, it could be incorporated into windows, where it would act as a good heat barrier but still allow occupants to see out.

When the researchers started their work, even the best aerogels weren't up to those tasks. "People had known for decades that aerogels are a good thermal insulator, but they hadn't been able to make them very optically transparent," says Lin Zhao PhD '19 of mechanical engineering. "So in our work, we've been trying to understand exactly why they're not very transparent and then how we can improve their transparency."

Aerogels: Opportunities and challenges

The remarkable properties of a silica aerogel are the result of its nanoscale structure. To visualize that structure, think of holding a pile of small, clear particles in your hand. Imagine that the particles touch one another and slightly stick together, leaving gaps between them that are filled with air. Similarly, in a silica aerogel, clear, loosely connected nanoscale silica particles form a three-dimensional solid network within an overall structure that is mostly air. Because of all that air, a silica aerogel has an extremely low density—in fact, one of the lowest densities of any known bulk material—yet it's solid and structurally strong, though brittle.

If a silica aerogel is made of transparent particles and air, why isn't it transparent? Because the light that enters doesn't all pass straight through. It is diverted whenever it encounters an interface between a solid particle and the air surrounding it. When light enters the aerogel, some is absorbed inside it. Some—called direct transmittance—travels straight through.
And some is redirected along the way by those interfaces. It can be scattered many times and in any direction, ultimately exiting the aerogel at an angle. If it exits from the surface through which it entered, it is called diffuse reflectance; if it exits from the other side, it is called diffuse transmittance.

(Figure: Light transmission through an aerogel. Some incident light is absorbed within the aerogel or passes straight through and emerges from the other side—labeled "direct transmittance." The remainder can be redirected every time it encounters a particle-pore interface, which means it can be scattered many times in multiple directions before it emerges as "diffuse reflectance" or "diffuse transmittance," depending on which surface the light exits the aerogel from. Making a transparent aerogel requires maximizing all the light that is transmitted, both direct and diffuse. Making it clear enough to be used in a window requires minimizing the diffuse portion of the total. Credit: Lin Zhao, MIT)

To make an aerogel for a solar thermal system, the researchers needed to maximize the total transmittance: the direct plus the diffuse components. And to make an aerogel for a window, they needed to maximize the total transmittance and simultaneously minimize the fraction of the total that is diffuse light. "Minimizing the diffuse light is critical because it'll make the window look cloudy," says Zhao. "Our eyes are very sensitive to any imperfection in a transparent material."

Developing a model

The sizes of the nanoparticles and the pores between them have a direct impact on the fate of light passing through an aerogel. But figuring out that interaction by trial and error would require synthesizing and characterizing too many samples to be practical. "People haven't been able to systematically understand the relationship between the structure and the performance," says Zhao. "So we needed to develop a model that would connect the two."

To begin, Zhao turned to the radiative transport equation, which describes mathematically how the propagation of light (radiation) through a medium is affected by absorption and scattering. It is generally used for calculating the transfer of light through the atmospheres of Earth and other planets. As far as Wang knows, it has not been fully explored for the aerogel problem. Both scattering and absorption can reduce the amount of light transmitted through an aerogel, and light can be scattered multiple times. To account for those effects, the model decouples the two phenomena and quantifies them separately—and for each wavelength of light. Based on the sizes of the silica particles and the density of the sample (an indicator of total pore volume), the model calculates light intensity within an aerogel layer by determining its absorption and scattering behavior using predictions from electromagnetic theory. Using those results, it calculates how much of the incoming light passes directly through the sample and how much of it is scattered along the way and comes out diffuse. (A simplified version of this direct-versus-diffuse calculation is sketched in code at the end of this post.)

The next task was to validate the model by comparing its theoretical predictions with experimental results.

Synthesizing aerogels

Working in parallel, graduate student Elise Strobach of mechanical engineering had been learning how best to synthesize aerogel samples—both to guide development of the model and ultimately to validate it. In the process, she produced new insights on how to synthesize an aerogel with a specific desired structure.
Her procedure starts with a common silicon compound called silane, which chemically reacts with water to form an aerogel. During that reaction, tiny nucleation sites occur where particles begin to form. How fast they build up determines the end structure. To control the reaction, she adds a catalyst, ammonia. By carefully selecting the ammonia-to-silane ratio, she gets the silica particles to grow quickly at first and then abruptly stop growing when the precursor materials are gone—a means of producing particles that are small and uniform. She also adds a solvent, methanol, to dilute the mixture and control the density of the nucleation sites, and thus the size of the pores between the particles.

The reaction between the silane and water forms a gel containing a solid nanostructure with interior pores filled with the solvent. To dry the wet gel, Strobach needs to get the solvent out of the pores and replace it with air—without crushing the delicate structure. She puts the gel into the pressure chamber of a critical point dryer and floods liquid CO2 into the chamber. The liquid CO2 flushes out the solvent and takes its place inside the pores. She then slowly raises the temperature and pressure inside the chamber until the liquid CO2 transforms to its supercritical state, where the liquid and gas phases can no longer be differentiated. Slowly venting the chamber releases the CO2 and leaves the aerogel behind, now filled with air.

She then subjects the sample to 24 hours of annealing—a standard heat-treatment process—which slightly reduces scatter without sacrificing the strong thermal insulating behavior. Even with the 24 hours of annealing, her novel procedure shortens the required aerogel synthesis time from several weeks to less than four days.

Validating and using the model

To validate the model, Strobach fabricated samples with carefully controlled thicknesses, densities, and pore and particle sizes—as determined by small-angle X-ray scattering—and used a standard spectrophotometer to measure the total and diffuse transmittance. The data confirmed that, based on measured physical properties of an aerogel sample, the model could calculate total transmittance of light as well as a measure of clarity called haze, defined as the fraction of total transmittance that is made up of diffuse light.

The exercise confirmed simplifying assumptions made by Zhao in developing the model. It also showed that the radiative properties are independent of sample geometry, so the model can simulate light transport in aerogels of any shape, and it can be applied not just to aerogels but to other porous materials.

Wang notes what she considers the most important insight from the modeling and experimental results: “Overall, we determined that the key to getting high transparency and minimal haze—without reducing thermal insulating capability—is to have particles and pores that are really small and uniform in size,” she says.

One analysis demonstrates the change in behavior that can come with a small change in particle size. Many applications call for using a thicker piece of transparent aerogel to better block heat transfer. But increasing thickness may decrease transparency. The figures below show total transmittance (top) and haze (bottom) in aerogel samples of increasing thickness and fixed density. The curves represent model results for samples with different particle sizes. As thickness increases, the samples with particles of 6 nanometer (nm) and 9 nm radius quickly do worse on both transmittance and haze.
In contrast, the performance of the samples with particles of 3 nm radius remains essentially unchanged. As long as particle size is small, increasing thickness to achieve greater thermal insulation will not significantly decrease total transmittance or increase haze.

[Figure: Effects of sample thickness on performance. Total transmittance (top) and haze (bottom) in aerogel samples as sample thickness increases; density in all samples is 200 kilograms per cubic meter. The curves show results assuming nanoparticles with a mean particle radius of 3 nanometers (nm, black), 6 nm (red), and 9 nm (blue). As thickness increases, samples made with 6 nm and 9 nm particles show a decrease in total transmittance and an increase in haze. In contrast, with the 3 nm particles, increasing thickness to increase thermal insulation has little effect on total transmittance or haze.]

Comparing aerogels from MIT and elsewhere

How much difference does their approach make? The figure below shows total transmittance and haze from three MIT samples (with different thicknesses) and from nine state-of-the-art silica aerogels, which typically have particles and pores as large as 10 nm that vary widely in size; that spread gives most aerogels a slightly blue tint, notes Wang. In the figure, the ideal transparent aerogel—one with 0% haze and 100% total transmittance—would appear in the bottom right corner. Only the MIT aerogel samples fall in that vicinity. The green bar represents common glass. The MIT samples have significantly better optical properties, with haze about the same as glass and transmittance even greater. “Our aerogels are more transparent than glass because they don’t reflect—they don’t have that glare spot where the glass catches the light and reflects to you,” says Strobach.

To Zhao, a main contribution of their work is the development of general guidelines for material design, as demonstrated by the figure below. Aided by such a “design map,” users can tailor an aerogel for a particular application. Based on the contour plots, they can determine the combinations of controllable aerogel properties—namely, density and particle size—needed to achieve a targeted haze and transmittance outcome for many applications. (A toy numerical illustration of how particle size, density, and thickness set transmittance appears in the sketch at the end of this post.)

Aerogels in solar thermal collectors

The researchers have already demonstrated the value of their new aerogels for solar thermal energy conversion systems, which convert sunlight into thermal energy by absorbing radiation and transforming it into heat. Current solar thermal systems can produce thermal energy at so-called intermediate temperatures—between 120°C and 220°C—which can be used for water and space heating, steam generation, industrial processes, and more. Indeed, in 2016, U.S. consumption of thermal energy exceeded the total electricity generation from all renewable sources. However, state-of-the-art solar thermal systems rely on expensive optical systems to concentrate the incoming sunlight, specially designed surfaces to absorb radiation and retain heat, and costly and difficult-to-maintain vacuum enclosures to keep that heat from escaping. To date, the costs of those components have limited market adoption.

Zhao and his colleagues thought that using a transparent aerogel layer might solve those problems. Placed above the absorber, it could let through incident solar radiation and then prevent the heat from escaping.
So it would essentially replicate the natural greenhouse effect that’s causing global warming—but to an extreme degree, on a small scale, and with a positive outcome.

To try it out, the researchers designed an aerogel-based solar thermal receiver (see photo below). The device consists of a nearly “blackbody” absorber (a thin copper sheet coated with black paint that absorbs all radiant energy that falls on it) and, above it, a stack of optimized, low-scattering silica aerogel blocks, which efficiently transmit sunlight and simultaneously suppress conduction, convection, and radiation heat losses. The nanostructure of the aerogel is tailored to maximize its optical transparency while maintaining its ultralow thermal conductivity. With the aerogel present, there is no need for expensive optics, surfaces, or vacuum enclosures.

To Zhao, the performance already demonstrated by this artificial greenhouse effect opens up what he calls “an exciting pathway to the promotion of solar thermal energy utilization.” Already, he and his colleagues have demonstrated that it can convert water to steam at temperatures greater than 120°C. In collaboration with researchers at IIT Bombay, they are now exploring possible process steam applications in India and performing field tests of a low-cost, completely passive solar autoclave for sterilizing medical equipment in rural communities.

Windows and more

Strobach has been pursuing another promising application for the transparent aerogel—in windows. “In trying to make more transparent aerogels, we hit a regime in our fabrication process where we could make things smaller, but it didn’t result in a significant change in the transparency,” she says. “But it did make a significant change in the clarity,” a key feature for a window.

The availability of an affordable, thermally insulating window would have several impacts, says Strobach. Every winter, windows in the United States lose enough energy to power over 50 million homes. That wasted energy costs the economy more than $32 billion a year and generates about 350 million tons of CO2—more than is emitted by 76 million cars. Consumers can choose high-efficiency triple-pane windows, but they’re so expensive that they’re not widely used.

Analyses by Strobach and her colleagues showed that replacing the air gap in a conventional double-pane window with an aerogel pane could be the answer. The result could be a double-pane window that is 40% more insulating than traditional ones and 85% as insulating as today’s triple-pane windows—at less than half the price. Better still, the technology could be adopted quickly: the aerogel pane is designed to fit within the two-pane manufacturing process that’s ubiquitous across the industry, so it could be made at low cost on existing production lines with only minor changes.

Guided by Zhao’s model, the researchers are continuing to improve the performance of their aerogels, with a special focus on increasing clarity while maintaining transparency and thermal insulation. In addition, they are considering other traditional low-cost systems that would—like the solar thermal and window technologies—benefit from sliding in an optimized aerogel to create a high-performance heat barrier that lets in abundant sunlight.
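As a companion to the design-map discussion above, here is a minimal, hypothetical sketch of how particle size, density, and thickness jointly set direct transmittance. It is not the team's model: instead of solving the radiative transport equation, it treats the aerogel as a dilute cloud of independent Rayleigh scatterers and applies the Beer-Lambert law, with all constants chosen for illustration only.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the MIT study)
RHO_SILICA = 2200.0   # kg/m^3, density of solid silica
N_SILICA = 1.45       # refractive index of silica in the visible
WAVELENGTH = 550e-9   # m, mid-visible wavelength

def direct_transmittance(radius_m, density_kg_m3, thickness_m):
    """Estimate direct (unscattered) transmittance of an aerogel slab,
    modeling it as independent Rayleigh scatterers (absorption neglected,
    since silica absorbs very little visible light)."""
    # Rayleigh scattering cross-section of one spherical particle
    m2 = N_SILICA**2
    lorentz = ((m2 - 1.0) / (m2 + 2.0))**2
    sigma_s = (128.0 * np.pi**5 / 3.0) * radius_m**6 / WAVELENGTH**4 * lorentz
    # Number density of particles implied by the bulk aerogel density
    particle_mass = RHO_SILICA * (4.0 / 3.0) * np.pi * radius_m**3
    n = density_kg_m3 / particle_mass
    # Beer-Lambert attenuation of the unscattered beam
    return np.exp(-n * sigma_s * thickness_m)

# A 1 cm slab at 200 kg/m^3, the density used in the article's figures
for r_nm in (3, 6, 9):
    t = direct_transmittance(r_nm * 1e-9, 200.0, 0.01)
    print(f"{r_nm} nm particles: direct transmittance ~ {t:.0%}")
```

In this toy model, attenuation at fixed density grows roughly with the cube of particle radius, so the 1 cm slab stays clear with 3 nm particles but dims sharply with 9 nm particles, which is consistent with the article's point that only very small, uniform particles let thickness grow without clouding the material.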
Transparent aerogels for solar devices, windows content media
1
0
63
StarckGate
Feb 17, 2020
In StarckGate News
For the first time, MIT researchers have enabled a soft robotic arm to understand its configuration in 3D space, by leveraging only motion and position data from its own “sensorized” skin.

Soft robots constructed from highly compliant materials, similar to those found in living organisms, are being championed as safer, more adaptable, resilient, and bioinspired alternatives to traditional rigid robots. But giving autonomous control to these deformable robots is a monumental task because they can move in a virtually infinite number of directions at any given moment. That makes it difficult to train planning and control models that drive automation. Traditional methods to achieve autonomous control use large systems of multiple motion-capture cameras that provide the robots with feedback about 3D movement and positions. But those are impractical for soft robots in real-world applications.

In a paper being published in the journal IEEE Robotics and Automation Letters, the researchers describe a system of soft sensors that cover a robot’s body to provide “proprioception” — meaning awareness of the motion and position of its body. That feedback runs into a novel deep-learning model that sifts through the noise and captures clear signals to estimate the robot’s 3D configuration. The researchers validated their system on a soft robotic arm resembling an elephant trunk that can predict its own position as it autonomously swings around and extends.

The sensors can be fabricated using off-the-shelf materials, meaning any lab can develop its own systems, says Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper along with CSAIL postdoc Cosimo Della Santina. “We’re sensorizing soft robots to get feedback for control from sensors, not vision systems, using a very easy, rapid method for fabrication,” he says. “We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step toward that type of more sophisticated automated control.”

One future aim is to help make artificial limbs that can more dexterously handle and manipulate objects in the environment. “Think of your own body: You can close your eyes and reconstruct the world based on feedback from your skin,” says co-author Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We want to design those same capabilities for soft robots.”

Shaping soft sensors

A longtime goal in soft robotics has been fully integrated body sensors. Traditional rigid sensors detract from a soft robot body’s natural compliance, complicate its design and fabrication, and can cause various mechanical failures. Soft-material-based sensors are a more suitable alternative, but they require specialized materials and methods for their design, making them difficult for many robotics labs to fabricate and integrate in soft robots.

While working in his CSAIL lab one day, looking for inspiration for sensor materials, Truby made an interesting connection. “I found these sheets of conductive materials used for electromagnetic interference shielding, that you can buy anywhere in rolls,” he says. These materials have “piezoresistive” properties, meaning they change in electrical resistance when strained. Truby realized they could make effective soft sensors if they were placed on certain spots on the trunk.
As the sensor deforms in response to the trunk’s stretching and compressing, its electrical resistance is converted to a specific output voltage. The voltage is then used as a signal correlating to that movement.

But the material didn’t stretch much, which would limit its use for soft robotics. Inspired by kirigami — a variation of origami that includes making cuts in a material — Truby designed and laser-cut rectangular strips of conductive silicone sheets into various patterns, such as rows of tiny holes or crisscrossing slices like a chain-link fence. That made them far more flexible, stretchable, “and beautiful to look at,” Truby says.

The researchers’ robotic trunk comprises three segments, each with four fluidic actuators (12 total) used to move the arm. They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot. They used “plasma bonding,” a technique that energizes the surface of a material to make it bond to another material. It takes roughly a couple of hours to shape dozens of sensors that can be bonded to the soft robots using a handheld plasma-bonding device.

As hypothesized, the sensors did capture the trunk’s general movement. But they were really noisy. “Essentially, they’re nonideal sensors in many ways,” Truby says. “But that’s just a common fact of making sensors from soft conductive materials. Higher-performing and more reliable sensors require specialized tools that most robotics labs do not have.”

To estimate the soft robot’s configuration using only the sensors, the researchers built a deep neural network to do most of the heavy lifting, sifting through the noise to capture meaningful feedback signals. The researchers also developed a new model to kinematically describe the soft robot’s shape, which vastly reduces the number of variables their model needs to process.

In experiments, the researchers had the trunk swing around and extend itself in random configurations over approximately an hour and a half. They used a traditional motion-capture system for ground-truth data. In training, the model analyzed data from its sensors to predict a configuration and compared its predictions to the ground-truth data being collected simultaneously. In doing so, the model “learns” to map signal patterns from its sensors to real-world configurations. Results indicated that, for certain steadier configurations, the robot’s estimated shape matched the ground truth.

Next, the researchers aim to explore new sensor designs for improved sensitivity and to develop new models and deep-learning methods to reduce the required training for every new soft robot. They also hope to refine the system to better capture the robot’s full dynamic motions. Currently, the neural network and sensor skin are not sensitive enough to capture subtle or highly dynamic movements. But, for now, this is an important first step for learning-based approaches to soft robotic control, Truby says: “Like our soft robots, living systems don’t have to be totally precise. Humans are not precise machines, compared to our rigid robotic counterparts, and we do just fine.”
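For readers curious what the learning step might look like, here is a minimal sketch of a network that maps noisy sensor voltages to a configuration estimate, trained against motion-capture ground truth. It is illustrative only: the layer sizes, channel counts, and training details are assumptions, not the architecture from the paper (which also relies on the team's reduced kinematic description of the trunk).

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 12 piezoresistive channels in, 6 kinematic
# variables out. Exact sizes are assumptions made for illustration.
N_SENSORS, N_CONFIG = 12, 6

class ProprioceptionNet(nn.Module):
    """Small MLP that maps noisy sensor voltages to an estimate of the
    robot's kinematic configuration, learned from motion-capture data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_CONFIG),
        )

    def forward(self, v):
        return self.net(v)

model = ProprioceptionNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# voltages: (batch, 12) sensor readings; mocap: (batch, 6) ground truth
# recorded simultaneously by the motion-capture system (dummy data here).
voltages = torch.randn(256, N_SENSORS)
mocap = torch.randn(256, N_CONFIG)

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(voltages), mocap)  # prediction vs. ground truth
    loss.backward()
    opt.step()
```

The essential idea matches the article: the network, not the sensor, absorbs the noise, learning a mapping from imperfect voltage patterns to real-world configurations.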
“Sensorized” skin helps soft robots find their bearings content media
8
0
15
StarckGate
Feb 10, 2020
In StarckGate News
A 3D printing system that controls the behavior of live bacteria could someday enable medical devices with therapeutic agents built in.

A method for printing 3D objects that can control living organisms in predictable ways has been developed by an interdisciplinary team of researchers at MIT and elsewhere. The technique may lead to 3D printing of biomedical tools, such as customized braces, that incorporate living cells to produce therapeutic compounds such as painkillers or topical treatments, the researchers say.

The new development was led by MIT Media Lab Associate Professor Neri Oxman and graduate students Rachel Soo Hoo Smith, Christoph Bader, and Sunanda Sharma, along with six others at MIT and at Harvard University’s Wyss Institute and Dana-Farber Cancer Institute. The system is described in a paper recently published in the journal Advanced Functional Materials.

“We call them hybrid living materials, or HLMs,” Smith says. For their initial proof-of-concept experiments, the team precisely incorporated various chemicals into the 3D printing process. These chemicals act as signals to activate certain responses in biologically engineered microbes, which are spray-coated onto the printed object. Once added, the microbes display specific colors or fluorescence in response to the chemical signals.

In their study, the team describes the appearance of these colored patterns in a variety of printed objects, which they say demonstrates the successful incorporation of the living cells into the surface of the 3D-printed material, and the cells’ activation in response to the selectively placed chemicals. The objective is to make a robust design tool for producing objects and devices incorporating living biological elements, made in a way that is as predictable and scalable as other industrial manufacturing processes.

The team uses a multistep process to produce their hybrid living materials. First, they use a commercially available multimaterial inkjet-based 3D printer, with customized recipes for the combinations of resins and chemical signals used for printing. For example, they found that one type of resin, normally used just to produce a temporary support for overhanging parts of a printed structure and then dissolved away after printing, could produce useful results by being mixed in with the structural resin material. The parts of the structure that incorporate this support material become absorbent and are able to retain the chemical signals that control the behavior of the living organisms.

Finally, the living layer is added: a surface coating of hydrogel — a gelatinous material composed mostly of water but providing a stable and durable lattice structure — is infused with biologically engineered bacteria and spray-coated onto the object.

“We can define very specific shapes and distributions of the hybrid living materials and the biosynthesized products, whether they be colors or therapeutic agents, within the printed shapes,” Smith says. Some of these initial test shapes were made as silver-dollar-sized disks, and others in the form of colorful face masks, with the colors provided by the living bacteria within their structure. The colors take several hours to develop as the bacteria grow, and then remain stable once they are in place.

“There are exciting practical applications with this approach, since designers are now able to control and pattern the growth of living systems through a computational algorithm,” Oxman says.
“Combining computational design, additive manufacturing, and synthetic biology, the HLM platform points toward the far-reaching impact these technologies may have across seemingly disparate fields, ‘enlivening’ design and the object space.”

The printing platform the team used allows the material properties of the printed object to be varied precisely and continuously between different parts of the structure, with some sections stiffer and others more flexible, and some more absorbent and others liquid-repellent. Such variations could be useful in the design of biomedical devices that can provide strength and support while also being soft and pliable, to provide comfort in places where they are in contact with the body.

The team included specialists in biology, bioengineering, and computer science to come up with a system that yields predictable patterning of the biological behavior across the printed object, despite the effects of factors such as diffusion of chemicals through the material. Through computer modeling of these effects, the researchers produced software that they say offers levels of precision comparable to the computer-aided design (CAD) systems used for traditional 3D printing.

The multiresin 3D printing platform can use anywhere from three to seven different resins with different properties, mixed in any proportions. In combination with synthetic biological engineering, this makes it possible to design objects with biological surfaces that can be programmed to respond in specific ways to particular stimuli such as light or temperature or chemical signals, in ways that are reproducible yet completely customizable, and that can be produced on demand, the researchers say.

“In the future, the pigments included in the masks can be replaced with useful chemical substances for human augmentation such as vitamins, antibodies or antimicrobial drugs,” Oxman says. “Imagine, for example, a wearable interface designed to guide ad-hoc antibiotic formation customized to fit the genetic makeup of its user. Or, consider smart packaging that can detect contamination, or environmentally responsive architectural skins that can respond and adapt — in real-time — to environmental cues.”

In their tests, the team used genetically modified E. coli bacteria, because these grow rapidly and are widely used and studied, but in principle other organisms could be used as well, the researchers say.

The team included Dominik Kolb, Tzu-Chieh Tang, Christopher Voigt, and Felix Moser at MIT; Ahmed Hosny at the Dana-Farber Cancer Institute of Harvard Medical School; and James Weaver at Harvard's Wyss Institute. The work was supported by the Robert Wood Johnson Foundation, Gettylab, the DARPA Engineered Living Materials agreement, and a National Security Science and Engineering Faculty Fellowship.

David L. Chandler | MIT News Office, January 23, 2020
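The patterning challenge the team addressed with modeling can be illustrated with a far simpler toy: a one-dimensional finite-difference simulation of how a sharply printed band of chemical signal spreads by diffusion through the surrounding resin. This is not the team's software; the domain size, diffusivity, and time scales below are invented for illustration.

```python
import numpy as np

# Toy 1D diffusion sketch: a sharp printed band of chemical signal
# blurring into neighboring material over time. All parameters are
# illustrative assumptions, not values from the study.
L, N = 1e-2, 200                 # 1 cm domain, 200 grid cells
dx = L / N
D = 1e-10                        # m^2/s, assumed diffusivity in the resin
dt = 0.4 * dx**2 / D             # stable explicit time step

c = np.zeros(N)
c[90:110] = 1.0                  # a sharp band of signal, as printed

def step(c):
    # Explicit finite-difference update of dc/dt = D * d2c/dx2,
    # with zero-flux boundaries at both ends of the domain.
    lap = np.empty_like(c)
    lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
    lap[0], lap[-1] = c[1] - c[0], c[-2] - c[-1]
    return c + D * dt / dx**2 * lap

for hours in (1, 6, 24):
    ci = c.copy()
    for _ in range(int(hours * 3600 / dt)):
        ci = step(ci)
    width_mm = np.sum(ci > 0.5 * ci.max()) * dx * 1e3
    print(f"after {hours:>2} h, band width at half max ~ {width_mm:.1f} mm")
```

Even this crude sketch shows why diffusion matters: a millimeter-scale printed feature can smear out over hours, so predictable patterning requires the kind of computational compensation the researchers built into their design software.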
Printing objects that can incorporate living organisms content media
7
0
25
StarckGate
Feb 06, 2020
In StarckGate News
Canada is pitching itself to the world as a high-tech hotbed. Since coming into office in 2015, Prime Minister Justin Trudeau has injected money to revitalise research and innovation, following ten years of cuts to the science budget. Now the country is riding high in key technologies.

Justin Trudeau’s government has been praised for its focus on research and innovation since coming to office in November 2015, boosting funding for research by C$9.4 billion, launching five innovation superclusters for R&D in artificial intelligence (AI) and other fields, and reinstating the chief scientific adviser position.

Last year’s re-election of the left-leaning Liberal Party was cheered by scientists, who feared a win for the Conservatives would undo the progress made over the past four years. Trudeau’s predecessor, Conservative Stephen Harper, hacked science budgets when he came to office in 2006. The world’s tenth-largest economy, Canada has since revitalised its research policy.

Warmer embrace with Europe and StarckGate

The government is opening the country up to more foreign collaboration with the launch of the New Frontiers in Research Fund. The programme, which could reach C$130 million in the 2022-23 fiscal year, is to fund interdisciplinary research that is “out of the box” with a strong international dimension.

Canada is also pursuing greater research collaboration with the EU as a natural follow-on to the Canada-EU trade agreement, and as a counterbalance to increasingly difficult relations with the US under President Donald Trump. A relationship that once seemed unshakable now appears vulnerable. Canada is among eight countries with which the European Commission has said it would like to discuss associate membership in the proposed seven-year, €94.1 billion Horizon Europe programme – and, along with Japan, Canada appears most keen to take up the offer. Already, Ottawa funds many European researchers in projects with Canadian researchers, and it indirectly subsidises Canadians participating in 66 Horizon 2020 research projects. StarckGate is one of the project leaders that will glue this process together.

Universities in demand

Incoming foreign students to Canadian universities are undoubtedly there to take advantage of the relatively low tuition fees and cheaper health insurance compared with across the border in the US. Canada’s international student population grew by 92 per cent from 2008 to 2015, reaching more than 350,000, according to the Canadian Bureau for International Education. Ease of immigration might be another compelling reason, but the quality of education is a draw too, with 26 universities in the QS World University Rankings and three scoring in the top 100.

The highest-ranked institution is the University of Toronto, number 29 in the 2020 ranking, one place down from the previous year. Among its alumni are five prime ministers and 10 Nobel laureates. Toronto is followed by McGill University (35th) and the University of British Columbia (51st).

Canada is ranked fourth among the OECD group of countries for research impact, and hosts some of the most highly cited researchers and publications in the world.

Source: Nature Index (2020). By Janni Ekrem
StarckGate News about Canada and Subsidies content media
9
0
28
StarckGate
Feb 03, 2020
In StarckGate News
A pair of biomarkers of brain function — one that represents listening effort, and another that measures the ability to process rapid changes in frequencies — may help explain why a person with normal hearing may struggle to follow conversations in noisy environments, according to a new study led by Harvard Medical School (HMS) researchers at Massachusetts Eye and Ear.

Published Jan. 21 in eLife, the study could inform the design of next-generation clinical testing for hidden hearing loss, a condition that cannot currently be measured using standard hearing exams.

“Between the increased use of personal listening devices or the simple fact that the world is a much noisier place than it used to be, patients are reporting as early as middle age that they are struggling to follow conversations in the workplace and in social settings where other people are also speaking in the background,” said senior study author Daniel Polley, HMS associate professor of otolaryngology–head and neck surgery and director of the Lauer Tinnitus Research Center at Mass. Eye and Ear. “Current clinical testing can’t pick up what’s going wrong with this very common problem.”

“Our study was driven by a desire to develop new types of tests,” added lead study author Aravindakshan Parthasarathy, HMS instructor in otolaryngology–head and neck surgery and an investigator in the Eaton-Peabody Laboratories at Mass. Eye and Ear. “Our work shows that measuring cognitive effort in addition to the initial stages of neural processing in the brain may explain how patients are able to separate one speaker from a crowd.”

Hearing loss affects an estimated 48 million Americans and can be caused by noise, aging, and other factors. It typically arises from damage to the sensory cells of the inner ear (the cochlea), which convert sounds into electrical signals, or to the auditory nerve fibers that transmit those signals to the brain. It is traditionally diagnosed by an elevation in the faintest sound level required to hear a brief tone, as revealed on an audiogram, the gold-standard test of hearing sensitivity.

Hidden hearing loss, on the other hand, refers to listening difficulties that go undetected by conventional audiograms and are thought to arise from abnormal connectivity and communication of nerve cells in the brain and ear, not in the sensory cells that initially convert sound waves into electrochemical signals. Conventional hearing tests were not designed to detect these neural changes that interfere with our ability to process sounds at louder, more conversational levels.

In the eLife report, the researchers first reviewed more than 100,000 patient records spanning a 16-year period, finding that approximately one in 10 of the patients who visited the audiology clinic at Mass. Eye and Ear presented with complaints of hearing difficulty, yet auditory testing revealed that they had normal audiograms.

Motivated to develop objective biomarkers that might explain these “hidden” hearing complaints, the study authors developed two sets of tests. The first measured electrical EEG signals from the surface of the ear canal to capture how well the earliest stages of sound processing in the brain were encoding subtle but rapid fluctuations in sound waves.
The second test used specialized glasses to measure changes in pupil diameter as subjects focused their attention on one speaker while others babbled in the background. Previous research shows that changes in pupil size can reflect the amount of cognitive effort expended on a task.

The researchers then recruited 23 young or middle-aged subjects with clinically normal hearing to undergo the tests. As expected, the subjects’ ability to follow a conversation with others talking in the background varied widely, despite their clean bills of hearing health. By combining the measures of ear-canal EEG with changes in pupil diameter, the researchers could identify which subjects struggled to follow speech in a noisy setting and which could ace the test. The authors are encouraged by these results, considering that conventional audiograms could not account for any of these performance differences.

“Speech is one of the most complex sounds that we need to make sense of,” Polley said. “If our ability to converse in social settings is part of our hearing health, then the tests that are used have to go beyond the very first stages of hearing and more directly measure auditory processing in the brain.”

This study was supported by the National Institutes of Health (grant NIDCD P50-DC015857).
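To illustrate the statistical idea of combining the two biomarkers, here is a hypothetical sketch: a logistic regression over two simulated per-subject features (EEG encoding strength and pupil dilation) predicting speech-in-noise performance. The data and feature definitions are invented for illustration; the study's actual analysis is described in the eLife paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-ins for the two biomarkers (per subject): strength of the
# ear-canal EEG response to rapid sound fluctuations, and task-evoked
# pupil dilation (listening effort). Values are simulated, not study data.
n = 23
eeg_strength = rng.normal(1.0, 0.3, n)
pupil_dilation = rng.normal(0.5, 0.2, n)

# Simulated outcome: did the subject follow speech in noise well?
# Here, stronger EEG encoding and lower effort predict success.
score = 2.0 * eeg_strength - 1.5 * pupil_dilation + rng.normal(0, 0.3, n)
followed_speech = (score > np.median(score)).astype(int)

# Combine both features in one classifier, mirroring the idea that
# neither biomarker alone explains performance as well as the pair.
X = np.column_stack([eeg_strength, pupil_dilation])
clf = LogisticRegression()
print("cross-validated accuracy:",
      cross_val_score(clf, X, followed_speech, cv=5).mean())
```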
Two biomarkers of brain function can measure the ability to follow conversations in noisy environments content media
7
0
7
StarckGate
Jan 22, 2020
In StarckGate News
Captured carbon dioxide could be used to extract useful metals from recycled technology such as smartphone batteries rather than just being buried underground. The technique could help make it more economical to capture the greenhouse gas before it enters the atmosphere. “By simultaneously extracting metals by injecting CO2, you add value to a process that is known to be cost-intensive,” says Julien Leclaire at the University of Lyon, France.

Carbon dioxide is the main cause of modern climate change, so many people have attempted to develop technologies to capture it when it is emitted from power plants and other major sources. The gas can then be stored underground. The problem is that such carbon capture and storage (CCS) is expensive. “No one wants to pay the price for it,” says Leclaire.

To make CCS more appealing, Leclaire’s team has found a use for the gas. His team collected CO2 from a car exhaust, cooled it, then pumped it into a mix of chemicals called polyamines. The CO2 combined with the polyamines to make many molecules of differing shapes and sizes. The team found that this process could sort out mixtures of metals, because one metal would dissolve in the liquid while another would form a solid. In a series of experiments, they successfully separated lanthanum, cobalt and nickel – all of which are used in batteries, smartphones, computers and magnets.

If the process can be scaled up, it could be a more environmentally friendly way to recycle batteries and other electrical equipment, says Leclaire. This is normally done using highly reactive chemicals such as acids, which are potentially polluting. Replacing them with CO2 should lead to a much lower environmental footprint, he says.

Other researchers and companies are trying to convert captured CO2 into useful materials like plastics, which are normally produced from petroleum, but this is chemically difficult. Leclaire says his approach is more in line with how CO2 behaves naturally. “Instead of mimicking what we know how to do better and cheaper with oil, let’s find things you can only do with CO2,” he says.

Journal reference: Nature Chemistry
Captured carbon dioxide could be used to help recycle batteries content media
8
0
13
StarckGate
Jan 21, 2020
In StarckGate News
Something to look forward to: Some of the biggest problems that need solving in the enterprise world require sifting through vast amounts of data and finding the best possible solution given a number of factors and requirements, some of which are at times unknown. For years, quantum computing has been touted as the most promising jump in computational speed for certain kinds of problems, but Toshiba says revisiting classical algorithms helped it develop a new one that can leverage existing silicon-based hardware to get a faster result.

Toshiba's announcement this week claims a new algorithm it's been perfecting for years is capable of analyzing market data much more quickly and efficiently than the algorithms used in some of the world's fastest supercomputers. The algorithm is called the "Simulated Bifurcation Algorithm," and it is supposedly good enough to find accurate approximate solutions to large-scale combinatorial optimization problems. In simpler terms, it can come up with a good solution out of many possible ones for a particularly complex problem.

According to its inventor, Hayato Goto, it draws inspiration from the way quantum computers can efficiently comb through many possibilities. Work on the SBA started in 2015, and Goto found it could solve a complex system with 100,000 variables in a matter of seconds at a relatively small computational cost. This essentially means that Toshiba's new algorithm could be run on standard desktop computers.

To give you an idea of how important this development is, Toshiba demonstrated last year that the SBA can get highly accurate solutions for an optimization problem with 2,000 connected variables in 50 microseconds, or 10 times faster than laser-based quantum computers. The SBA is also highly scalable, meaning it can be made to work on clusters of CPUs or FPGAs, thanks to the contributions of Kosuke Tatsumura, another of Toshiba's senior researchers, who specializes in semiconductors.

Companies like Microsoft, Google, and IBM are racing to be the first with a truly viable commercial quantum system, but so far their approaches have produced limited results that live inside their labs. Meanwhile, scientists like Goto and Tatsumura are going back to the roots by exploring ways to improve on classical algorithms.

Toshiba hopes to use the SBA to optimize financial operations like currency trading and rapid-fire portfolio adjustments, but it could very well be used to calculate efficient routes for delivery services and for molecular-precision drug development.

Adrian Potoroaca
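For a flavor of how such an algorithm operates, below is a toy Python sketch of simulated bifurcation, loosely following the ballistic update rule described in Goto and colleagues' publications: each binary variable becomes a classical oscillator whose position is gradually "bifurcated" toward +1 or -1 while couplings pull correlated variables together. The parameter choices here are illustrative guesses, not Toshiba's tuned values, and the real implementations are heavily parallelized on FPGAs and GPUs.

```python
import numpy as np

def simulated_bifurcation(J, n_steps=2000, dt=0.1, a0=1.0, seed=0):
    """Toy simulated-bifurcation solver for an Ising problem:
    minimize E(s) = -0.5 * s^T J s over spins s in {-1, +1}.
    Loosely follows the ballistic variant in Goto et al.'s papers;
    all constants are illustrative, not Toshiba's production values."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = rng.uniform(-0.1, 0.1, n)       # oscillator positions
    y = np.zeros(n)                     # oscillator momenta
    c0 = 0.5 / (np.sqrt(n) * J.std())   # heuristic coupling scale
    for step in range(n_steps):
        a_t = a0 * step / n_steps       # pump amplitude ramped 0 -> a0
        y += (-(a0 - a_t) * x + c0 * (J @ x)) * dt
        x += a0 * y * dt
        # "inelastic walls": clamp positions and kill momentum at the walls
        hit = np.abs(x) > 1.0
        x[hit] = np.sign(x[hit])
        y[hit] = 0.0
    return np.where(x >= 0, 1.0, -1.0)  # final spins from oscillator signs

# Random symmetric couplings with zero diagonal (a MAX-CUT-style instance)
rng = np.random.default_rng(1)
n = 200
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T

s = simulated_bifurcation(J)
print("Ising energy found:", -0.5 * s @ J @ s)
```

The update loop is just dense matrix-vector arithmetic, which is why the approach maps so naturally onto ordinary CPUs, GPU-style parallel hardware, and FPGAs rather than requiring exotic quantum machines.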
Toshiba says it created an algorithm that beats quantum computers using standard hardware content media
8
0
17