
Dynamic Prototyping with SketchFlow in Expression Blend: Sketch Your Ideas And Bring Them to Life!



This book is for designers, user experience pros, creative directors, developers, or anyone who wants to create rich, interactive, and compelling products. If you want to communicate innovative ideas, research, experiment, and prototype in the language of the interface, Dynamic Prototyping with SketchFlow in Expression Blend is the perfect text. Learn how to sketch, iterate, and validate ideas—utilizing the power and productivity within SketchFlow.


Price: $27.99


Click here to buy from Amazon

Bring modern modelling techniques to vehicle design



The MapleSim Connector for VI-CarRealTime allows engineers to incorporate high-fidelity, multidomain models created in MapleSim into the real-time vehicle simulation environment of VI-CarRealTime.

MapleSim, available from Adept Scientific (Letchworth, Herts), is a Modelica-based physical modelling and simulation tool that applies modern techniques to dramatically reduce model development and analysis time while producing fast, high-fidelity simulations. VI-CarRealTime provides a fully validated real-time vehicle simulation environment that automotive engineers can use to optimise vehicle designs and control system performance. By using VI-CarRealTime, vehicle dynamics engineers can also perform large Design of Experiments (DOE) and multi-objective optimisation studies quickly and easily.


“VI-grade customers can now leverage the intuitive physical modelling environment of MapleSim to create high-fidelity models of vehicle subsystems in a fraction of the time it would take them in other tools,” said Juergen Fett, Managing Director, VI-grade. “Then, they get automatically generated real-time code of the subsystem that they can incorporate into VI-CarRealTime’s full vehicle model, replacing the default subsystem with their own. Not only does this avoid time-consuming and error-prone hand-coding, but MapleSim lets them create extremely detailed models while maintaining real-time simulation speeds. As a result, the simulation results are closer to reality, which ultimately leads to shorter development cycles and better products.”


“The combination of MapleSim, VI-CarRealTime and the Connector allows for the fast, accurate modelling of automotive subsystems, such as powertrains and drivelines,” said Paul Goossens, Vice President of Applications Engineering at Maplesoft. “Automotive engineers can easily do innovative work, exploring their designs in new and deeper ways, detecting problems earlier in the design cycle, and developing high-quality, practical solutions to their design challenges.”


The MapleSim Connector for VI-CarRealTime is the latest in a growing collection of connector tools and products for MapleSim, which includes connectors for Simulink™, LabVIEW™, dSPACE® Systems, C code generation and more.


For more information go to: http://www.adeptscience.co.uk/products/mathsim/maple/tool...


The Maplesoft product range is supplied and supported by Adept Scientific in the UK, Ireland, Scandinavia and the Nordic countries.


About Maplesoft


Maplesoft has over 20 years of experience developing products for technical education and research, offering a solution that applies to every aspect of academic life. Its product suite reflects the philosophy that given great tools, people can do great things.


Maplesoft’s core technology is the world’s most advanced symbolic computation engine, which is the foundation for all of its products, including Maple™, the technical computing and documentation environment; MapleSim™, the high-performance, multi-domain modelling and simulation tool for physical systems; and Maple T.A., a web-based system for creating and assessing online tests and assignments.


About VI-grade


VI-grade GmbH is the leading provider of best-in-class software products and services for advanced applications in the field of system-level simulation. VI-grade, established in 2005, delivers innovative solutions to streamline the development process from concept to sign-off in the transportation industry, mainly automotive, aerospace, motorcycle, motorsports and railways.


About Adept Scientific plc


Adept Scientific is one of the world's leading suppliers of software and hardware products for research, scientific, engineering and technical applications on desktop computers and has offices in the UK, Germany and throughout the Nordic region. Full details and contact information for all Adept Scientific international offices are available at www.adeptscience.com; or telephone +44 (0)1462 480055.


View the original article here


Super-detailed CGI human skin could finally cross the uncanny valley, bring realistic faces to games and movies



Computer technology has grown ever more advanced in recent decades, but we reached an impasse a while back where technology collided with biology in an unexpected way. Trying to create digital versions of human faces usually resulted in something bizarre or downright disturbing. The phenomenon, known as the uncanny valley, is still vexing for the movie and game industries. However, a team led by Abhijeet Ghosh and Paul Debevec of the University of Southern California (USC) has developed a method to make artificial faces even more real, perhaps crossing the uncanny valley. It turns out the answers were only skin-deep.


The human brain is precisely tuned to understand what a face is supposed to look like. These subtle cues are deeply ingrained, and when we find them missing, the response is often viscerally negative. It can be as simple as muscles around the eyes contracting oddly, or the way lips part during speech. Science is getting closer to nailing down the mechanical processes, but the USC team is tackling the most challenging aspect — skin.


It turns out modeling the reflection of light on skin is extremely complicated because skin itself is extremely complicated. It’s a patchwork of bumps, pores, blemishes, and tiny wrinkles that creep in as you approach middle age. When these details are missing, digital skin doesn’t look real and we venture into uncanny valley territory no matter how accurate the movements are. The technique being developed at USC more accurately simulates skin in a few ways; the first has to do with the lighting.


Each simulated light source is split into four rays — one that bounces off the epidermis, and three others that penetrate the skin to different depths before being scattered. The result is a more natural sheen with realistic shadows.
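To make the idea concrete, here is a minimal sketch (in Python, not the USC team's actual code) of how a renderer might combine those four contributions for one light sample. The layer depths, weights, and blending constants are illustrative placeholders, not measured values from the research.

    import math

    # Illustrative subsurface layers: (penetration depth in mm, weight).
    # These constants are assumptions for the sketch, not published data.
    SUBSURFACE_LAYERS = [(0.1, 0.5), (0.5, 0.3), (1.5, 0.2)]

    def shade_skin(n_dot_l, intensity):
        """Radiance for one light sample: surface bounce + three subsurface rays."""
        n_dot_l = max(n_dot_l, 0.0)

        # Ray 1: direct reflection off the epidermis (simple Lambertian stand-in).
        surface = intensity * n_dot_l

        # Rays 2-4: light that penetrates the skin, attenuates with depth, and
        # re-emerges diffused, softening shadow edges into a natural sheen.
        subsurface = 0.0
        for depth, weight in SUBSURFACE_LAYERS:
            attenuation = math.exp(-depth)        # deeper rays lose more energy
            diffusion = (n_dot_l + 1.0) / 2.0     # diffused light bleeds past the terminator
            subsurface += intensity * weight * attenuation * diffusion

        return 0.6 * surface + 0.4 * subsurface

    print(shade_skin(0.7, 1.0))  # prints roughly 0.65

The design point is simply that the subsurface terms fall off more gently than the surface term, which is what keeps shadows from looking hard-edged and plastic.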


To make this technology really work, the team also cranked up the level of detail for CGI skin. Using a special scanner, Ghosh and Debevec took extremely high-resolution images of human skin from volunteers’ cheeks, chins, and foreheads. Each pixel in the images covered an area only 10 micrometers across (that’s 0.00001 meters, by the way). At this level of detail, a single skin cell is only three pixels wide on average.
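Those figures hang together with a quick back-of-the-envelope check; the ~30-micrometer cell width below is an assumption chosen to match the three-pixels-per-cell claim, not a number from the article.

    # Sanity-checking the resolution numbers above (illustrative only).
    pixel_size_um = 10                    # each pixel covers ~10 micrometers
    skin_cell_um = 30                     # assumed typical skin cell width
    print(pixel_size_um * 1e-6)           # 1e-05 -> 0.00001 meters, as stated
    print(skin_cell_um / pixel_size_um)   # 3.0 -> about three pixels per cell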


The scans were used to generate incredibly detailed 3D renders of skin. When combined with the new simulated lighting, the results are impressive. The CGI network of pores and bumps makes the faces look almost real as the artificial light plays across them.


There has been intense interest from game developers and Hollywood as this project has proceeded. The CGI blockbuster Avatar used a rudimentary version of the USC technology to make the film’s blue-skinned aliens more realistic. Activision and Nvidia have been collaborating with USC in hopes of developing a software package that can generate photorealistic faces on consumer hardware like game consoles and PCs. The day might be fast approaching when your in-game avatar looks completely real in every way that matters.


View the original article here

