Tag Archives: GPU

Samsung is taking pre-orders on its Ativ Book 9 Lite

Samsung announced this morning that it is taking pre-orders on its Ativ Book 9 Lite notebook PC, which was first announced on June 20. One of the most interesting features of the new notebook is the custom AMD CPU, which Samsung is describing as a “white-label” quad core.

“It’s something we wanted that was outside [AMD’s] roadmap,” Samsung PC Product manager David Ng told me in a briefing on Friday. Asked why Samsung chose to partner with AMD for this custom CPU, Ng replied “A huge part of it was the superior integrated GPU in AMD’s parts.”

Image credit: Samsung
The Ativ Book 9 Lite will be available in two colors: “Marble white” or “ash black.”

Be that as it may, Samsung tapped Intel’s new Haswell chips to power its flagship notebook—the Ativ Book 9—which made waves last month when Samsung revealed it would come equipped with a 3200 by 1800-pixel, 13.3-inch touchscreen.

The Ativ Book 9 Lite will also have a 13.3-inch touchscreen, but this one delivers a more down-to-earth resolution of 1366 by 768 pixels. And while Samsung has not announced pricing or availability of its flagship notebook, the Lite model is selling for a reasonable $800 and is expected to reach customers beginning July 28.


…read more

Source: FULL ARTICLE at PCWorld

12 ways Windows 8 dominates the OS competition

While we’re on the topic of hardware, let’s talk monitors. More specifically, let’s talk lots of monitors.

Display die-hards love Windows 8’s deeply improved multi-monitor functionality—it’s a huge step up over Windows 7’s several-screen support. Is it perfect? Nope. But it’s pretty darned good, and an absolute cinch to set up.

Macs and Linux boxes have dead-simple multi-monitor software, too, but they can be a hassle to set up on the hardware side. Mac desktops drive video via Mini DisplayPort or Thunderbolt connectors. Both technologies are utterly awesome, don’t get me wrong—but neither is anywhere near as ubiquitous as HDMI, DVI, or VGA. Linux multiple-monitor support works great, except for when it doesn’t. Finding working monitor drivers can occasionally be a hassle, and Linux sometimes stutters while trying to drive multi-monitor setups in multi-GPU rigs.

From: http://www.pcworld.com/article/2035821/12-ways-windows-8-dominates-the-os-competition.html#tk.rss_all

The 10 most important graphics cards in PC history

2009, ATI

One of the last truly powerful cards to bear the ATI nameplate before new owner AMD dropped the “TI” in favor of “MD,” the Radeon HD 5970 was so masterfully engineered, so infused with mega-wonderfulness, that it remains an option even today, some four years later. Indeed, some reviewers of the time looked at the 5970, this elongated monument to 3D excess, as not only the fastest video card ever, but perhaps also a digital dagger through the heart of rival Nvidia. That notion proved to be a bit much, as our final entry will attest, but this dual-GPU brute (12 by 4 by 1.5 inches, 3.5 pounds) most assuredly fanned the flames of an already impassioned battle.

Image credit: iXBT.com

From: http://www.pcworld.com/article/2034487/the-10-most-important-graphics-cards-in-pc-history.html#tk.rss_all

What makes a “lightweight” desktop environment lightweight?

Over the last few days I have been wondering what a “lightweight” desktop actually is. And I must say I couldn’t come up with an answer to that question.

I was considering various criteria like “being memory efficient”, which I discarded for obvious reasons. First of all, it’s difficult to measure memory usage correctly (I haven’t seen anyone who publishes numbers doing it correctly, and that includes especially Phoronix). And then it’s mostly a comparison of apples to oranges: loading a high-resolution wallpaper might make all the difference in memory usage, and if desktop environment Foo provides features which are not provided by Bar, it’s obvious that Foo uses more memory. But it’s still apples vs. oranges; it’s not a comparison of memory, it’s a comparison of features. And of course one might consider the time-memory tradeoff.
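
To make the measurement point concrete, here is a minimal, Linux-only sketch (not from the original post) that prints a process’s RSS and PSS from /proc/self/smaps_rollup; it assumes a 4.14+ kernel and is only meant to show why the two numbers, and hence naive comparisons, diverge when a lot of memory is shared:

// Minimal Linux-only sketch: RSS counts shared pages in full for every process,
// while PSS splits them across the processes sharing them. Requires
// /proc/<pid>/smaps_rollup (Linux 4.14+). Illustrative only; not from the post.
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream rollup("/proc/self/smaps_rollup");
    std::string line;
    long rssKb = 0, pssKb = 0;
    while (std::getline(rollup, line)) {
        if (line.rfind("Rss:", 0) == 0)
            rssKb = std::stol(line.substr(4));   // lines look like "Rss:   1234 kB"
        else if (line.rfind("Pss:", 0) == 0)
            pssKb = std::stol(line.substr(4));
    }
    std::cout << "Rss: " << rssKb << " kB (shared pages counted in full)\n"
              << "Pss: " << pssKb << " kB (shared pages split between sharers)\n";
    return 0;
}

Comparing desktop environments by RSS alone overweights shared libraries that are mapped by many processes, which is exactly part of the apples-to-oranges problem described above.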

So is it all about features? Obviously not. If there is a feature a user needs and uses, it cannot be bloat. The fact that a desktop environment has only a few features cannot be the key to being lightweight. Being evil now: many people say GNOME is removing features, but nobody would say that GNOME is lightweight.

What about support for old systems? That’s not lightweight, that’s support for old hardware. And it’s something which doesn’t make much sense given Moore’s law, which raises the first question: what is old hardware? One year, two years, ten years? Is it a moving target, or is a Pentium III the reference for all time? Optimizing for old hardware means not making use of modern hardware capabilities. But does it make sense not to use modern hardware if it is available? Using the GPU for things the GPU can do better than the CPU is a good thing, isn’t it? Parallelizing a computation across multiple cores where possible is a good thing, isn’t it? But if you do so, you are optimizing for modern hardware and not for old hardware. So does saying you are good for old hardware imply you are bad on new hardware?

I’m also wondering how one can optimize for old hardware at all. Developers tend to have new hardware, precisely to avoid problems like this. And how can one keep support for old hardware when the complete stack is moving towards new hardware? Who tests the kernel against old hardware? Who provides DRI drivers for obsolete hardware which doesn’t fit into modern mainboards (who remembers AGP or PCI)? Who ensures that software still works on 32-bit systems, and who would even notice such a breakage, for example in the X server?

So lightweight cannot mean fit for old hardware. And remember: optimizing for old hardware is not the same as optimizing for modern low-end hardware. Even the Raspberry Pi has a stronger CPU (700 MHz) than the oldest Pentium III (450 MHz), not to mention things like OpenGL…

What’s it then? Let’s ask Wikipedia. For Xfce it tells us that it “aims to be fast and lightweight, while still being visually appealing and easy to use”. Unfortunately there’s no

From: http://blog.martin-graesslin.com/blog/2013/04/what-makes-a-lightweight-desktop-environment-lightweight/

Accelerated computing / GPUs

By figaro

There are plenty of sources that explain the performance per watt of a computer. However, I wanted to investigate how accelerated computing components (notably GPUs) have become more efficient at a lower price over the years. I have thus defined a metric, performance per watt per price-unit, and plotted it by launch date and launch price.
The results are as follows:

Notes:

  • GFLOPS are single precision
  • Prices are in euro as they were approximately at launch date; if no launch price is known, it has not been proxied with a current price and no plot point is shown
  • Data is taken mostly from the Wikipedia article “Comparison of Nvidia graphics processing units”, and collection starts from the advent of multi-core GPU architectures. Nvidia was chosen because of our professional interest in deploying CUDA; this does not constitute an endorsement.
  • Data is on retail components as opposed to OEM components.
  • Launch prices are often artificially high, because it is the feature set that appeals to the enthusiasts who are also the first movers. The price degradation over time (loosely 10% per year) has not been taken into account.
  • The last two plot points are the GeForce GTX Titan and GeForce GTX 650 Ti Boost.

So while performance per watt has increased more than 5-fold over the observed period, the performance per watt per price-unit has not kept up, growing almost 4-fold.
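
As an illustration of how the metric itself is computed (the thread’s actual data set and plots are not reproduced here), a small sketch with invented example cards:

// Sketch of the metric described above: single-precision GFLOPS per watt per euro.
// The two entries are invented placeholders, not cards from the thread's data.
#include <iostream>
#include <string>
#include <vector>

struct Card {
    std::string name;
    double gflopsSp;   // single-precision GFLOPS at launch
    double tdpWatts;   // board power
    double launchEur;  // approximate launch price in euro
};

int main()
{
    std::vector<Card> cards = {
        { "Hypothetical card A", 1300.0, 170.0, 260.0 },
        { "Hypothetical card B", 4500.0, 250.0, 950.0 },
    };

    for (const Card &c : cards) {
        double perWatt = c.gflopsSp / c.tdpWatts;        // GFLOPS/W
        double perWattPerEur = perWatt / c.launchEur;    // GFLOPS/W/EUR
        std::cout << c.name << ": " << perWatt << " GFLOPS/W, "
                  << perWattPerEur << " GFLOPS/W/EUR\n";
    }
    return 0;
}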

In fact, there is even an inverse relationship between the number of cores and the performance metric:

Perhaps the high-end cards do not drop in price as much, maintaining their launch price level to finance the development of the lower-end cards.

From: http://www.unix.com/high-performance-computing/221051-accelerated-computing-gpus.html

Intel Is Helpless

By Evan Niu, CFA, The Motley Fool


Following the gloomy estimates out of market researcher IDC on the sad state of the PC market in the first quarter, Intel shares dropped by as much as 3%, since the bulk of Chipzilla’s business is still tied directly to the PC market.

Bulls will point to the company’s upcoming line of chips based on its Haswell architecture as a possible catalyst to reinvigorate growth. Intel’s latest and greatest silicon is expected to offer modest CPU performance gains of 5% to 15% (assuming same clock speed as the previous-generation Ivy Bridge) and more impressive integrated GPU performance improvements of 30% to 50%, all while improving power efficiency and battery life. CEO Paul Otellini recently said that Haswell would offer “the single largest generation-to-generation battery life improvement” in the company’s history.

The problem? None of that matters.

Of course performance increases every year. That’s a given. There’s no doubt that Intel continues to create the most powerful consumer microprocessors known to man. When it comes to manufacturing prowess, the company remains unrivaled. But those aren’t the types of things that will sway the average consumer in the market for a computing device, who is increasingly switching to smartphones and tablets.

In some ways, Intel is facing performance oversupply. Most of that raw power likely isn’t being tapped by casual users anyway, so consumers are turning their attention to lower-cost mobile devices where Intel still has no traction. Intel’s fate remains inextricably linked to Microsoft’s, and Windows 8 is absolutely bombing with the average user. Having a cutting-edge processor does no good if a consumer hates the interface.

J.P. Morgan analyst Mark Moskowitz concurs. He believes that though Haswell may spur some demand within the Ultrabook segment, it won’t be enough to offset the broader declines that the PC industry is facing. Of particular concern were shipments in the Asia-Pacific region: units fell 13% there, the first double-digit decline posted in that geography. Moskowitz also thinks that OEMs will start looking for price breaks amid soft demand.

Haswell is more of a threat to NVIDIA than anything else, primarily at the low end of the discrete GPU market. NVIDIA has held up admirably to the threat of integrated graphics, and its high-end discrete GPUs will always blow integrated solutions out of the water.

Last year, NVIDIA investor relations exec Rob Csongor emphasized to me that investors should focus more on the difference between integrated performance and discrete performance, since that difference is where NVIDIA conveys its value proposition to gamers and enthusiasts. With Haswell, that difference will get smaller.

Despite Intel‘s attempts to crack mobile, it still has little to nothing to show for it. Within the PC market that Intel still leans on (63% of revenue last quarter), Intel is helpless.

When it comes to dominating markets, it doesn’t get much better than Intel’s position in the PC microprocessor arena. However, that market is maturing, and Intel finds itself in

From: http://www.dailyfinance.com/2013/04/11/intel-is-helpless/

Of Bitcoins and e-bullion: The sad history of virtual currency

A Bitcoin for your thoughts? With bankers and lawmakers playing the fiddle while the Roman (and Greek and Cypriot) economy burns, the peer-to-peer currency is exploding in popularity, driving the cost of a single GPU-mined “coin” to $200 on Tuesday—a huge leap from the $15 that a single Bitcoin commanded in January.

Yup, digital dollars are worth more than actual ones in these wacky times. But are Bitcoins a bastion of noncentralized strength or just the latest soon-to-fizzle fad in the short, sad history of virtual economies? Keep a hand on your Bitcoin wallet during this stroll down memory lane, folks. Past forays into the computerization of cash have often been amusing, but they’ve never ended well.

…read more

Source: FULL ARTICLE at PCWorld

Super Micro Computer Inc. Schedules Conference Call and Webcast for Third Quarter Fiscal 2013 Financial Results

By Business Wire via The Motley Fool


Super Micro Computer Inc. Schedules Conference Call and Webcast for Third Quarter Fiscal 2013 Financial Results

SAN JOSE, Calif.–(BUSINESS WIRE)– Super Micro Computer, Inc. (NAS: SMCI), the leader in server technology innovation and green computing, today announced that it will release third quarter fiscal 2013 financial results on Tuesday, April 23, 2013, immediately after the close of regular trading, followed by a teleconference beginning at 2:00 p.m. (Pacific Time).

Conference Call/Webcast Information for April 23, 2013

Supermicro will hold a teleconference to announce its third fiscal quarter financial results on April 23, 2013, beginning at 2 p.m. Pacific time. Those wishing to participate in the conference call should call 1-888-395-3227 (international callers dial 1-719-785-1753) a few minutes prior to the call’s start to register. A replay of the call will be available through 11:59 p.m. (Eastern Time) on Tuesday, May 7, by dialing 1-877-870-5176 (international callers dial 1-858-384-5517) and entering replay PIN 2907442.

Those wishing to access the live or archived webcast via the Internet should go to the Investor Relations tab of the Supermicro website at www.Supermicro.com.

About Super Micro Computer, Inc.

Supermicro, the leader in server technology innovation and green computing, provides customers around the world with application-optimized server, workstation, blade, storage and GPU systems. Based on its advanced Server Building Block Solutions, Supermicro offers the most optimized selection for IT, datacenter and HPC deployments. The company’s system architecture innovations include the Twin server, Double-Sided Storage™ and SuperBlade® product families. Offering the most comprehensive product lines in the industry, Supermicro provides businesses of all sizes with energy-efficient, earth-friendly solutions that deliver unmatched performance and value. Founded in 1993, Supermicro is headquartered in Silicon Valley with worldwide operations and manufacturing centers in Europe and Asia. For more information, visit www.supermicro.com.

Supermicro, Server Building Block Solutions, and SuperBlade are registered trademarks and Double-Sided Storage is a trademark of Super Micro Computer, Inc. All other trademarks are the property of their respective owners.

SMCI-F

Investor Relations Contact:
Super Micro Computer, Inc.
Perry G. Hayes
SVP, Investor Relations
(408) 895-6570
PerryH@Supermicro.com

KEYWORDS:   United …read more

Source: FULL ARTICLE at DailyFinance

Kevin DuBois: Mir and Android GPUs

With Ubuntu Touch (and Mir/Unity Next), we’re foraying into a whole new world of Android drivers. Given the community’s bad memories from the past about graphics, let’s clear up what’s going on and how we’ll steer clear of the murky waters of new driver support to get rock-solid Ubuntu on mobile platforms.

Android Driver Components and their Openness

First, let’s talk about openness. Driver ecosystems tend to be complex, and Android is no exception. To get a driver to work on Android, the GPU vendors provide:

  1.  a kernel module
    The kernel module must be GPL compatible, and this part of the driver is always open. It controls the GPU hardware, and its main responsibility is to manage the incoming command buffers and the outgoing color buffers.
  2. libhardware implementations from android HAL.
    These libraries are the glue that takes care of some basic operations the userspace system has to do, like compositing a bunch of buffers, posting to the framebuffer, or getting a color buffer for the graphics driver to use. These libraries (called gralloc, hwc, fb, among others) are sometimes open and sometimes closed.
  3. an OpenGLES and EGL userspace library
    These are the parts that program the instructions for the GPU, and they are the ‘meat and potatoes’ of what the vendors provide. Unfortunately this code is closed source, as many people already know. Just because they are closed source doesn’t mean we don’t have some idea of what’s going on in them, though. They rely on the open source parts and have been picked apart pretty well by various reverse-engineering projects (like freedreno).

All the closed parts of the driver system are used via headers that are under the Apache or Khronos licenses. These headers are APIs that change slowly, and do so in a (relatively) non-chaotic manner controlled by Google or the Khronos group. These APIs are very distinct from the DRM/gbm/etc. that we see on ‘the free stack’.
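
As a rough illustration of that header boundary (this is not Mir’s code, just a sketch that only builds against the Apache-licensed libhardware headers), a userspace program can load whatever gralloc blob the vendor ships without knowing anything about its internals:

// Sketch only: loads the vendor-provided gralloc module through the open
// libhardware headers (hardware/hardware.h, hardware/gralloc.h). The blob's
// internals stay closed; the API boundary is the open header.
#include <hardware/hardware.h>
#include <hardware/gralloc.h>
#include <cstdio>

int main()
{
    const hw_module_t *module = nullptr;
    if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module) != 0) {
        std::fprintf(stderr, "no gralloc module found\n");
        return 1;
    }
    std::printf("gralloc module: %s (by %s)\n", module->name, module->author);

    alloc_device_t *alloc = nullptr;
    // gralloc_open()/gralloc_close() are inline helpers defined in the open header.
    if (gralloc_open(module, &alloc) == 0) {
        std::printf("allocator device opened\n");
        gralloc_close(alloc);
    }
    return 0;
}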

The drivers are not 100% open, and they’re not 100% closed either. Without the closed source binaries, you can’t use the core GLES functionality that you want, but enough parts of the system are open that you can infer what big parts of the system are doing. You can also have an open source ecosystem like Mir or Android built around them, because we interface using open headers.

As far as openness goes, it’s a grey area; it’s acceptable to call them blob drivers, though.

Stability/Performance/Power

We have a lot of bad memories about things not working. I remember fighting all the time with my compiz drivers back in the days of AIGLX and the like. Luckily when we’re working on Mir and phones, we’ve remembered all this pain and have a reasonable way that we’ll jump onto the new driver platform without any wailing or gnashing of teeth.

The biggest advantage we have with the mobile drivers is that they are based around a fixed industry API that has proven itself on hundreds of millions of devices. We’re not reinventing the wheel …read more

Source: FULL ARTICLE at Planet Ubuntu

NVIDIA Stock Is Undervalued, Despite Rumored Nexus 7 Loss

By Adam Levine-Weinberg, The Motley Fool



NVIDIA shareholders have had their patience tested as the company has missed the recent stock market rally. In the last six months, NVIDIA stock has for the most part stayed between $12 and $13, underperforming the S&P 500 by more than 10%.

Data by YCharts.

Most recently, the stock has come under pressure due to rumors that Qualcomm‘s Snapdragon chip has been selected to power Google‘s next-generation Nexus 7 tablet. The first-generation Nexus 7 was the best-selling device to date running on NVIDIA‘s Tegra mobile processors, helping fuel 50% growth in Tegra revenue last year. Moreover, the Nexus 7’s strong sales performance generated excitement about the company’s opportunity in the rapidly growing tablet market, driving NVIDIA stock to nearly $15 last summer. If Qualcomm has indeed won that design slot for 2013, NVIDIA will suffer a significant revenue headwind this year.

Nevertheless, from a long-term investing perspective, NVIDIA looks like an attractive opportunity. At the end of its most recent fiscal year, the company had over $3.7 billion in cash and investments on its balance sheet. This works out to approximately $6 per share, or nearly half of its market cap. This large cash stockpile provides a good safety net for investors. Meanwhile, the company introduced two promising new products earlier this year — GeForce GRID and Tegra 4i — which should begin to ramp up later this year. Based on these factors, NVIDIA stock seems significantly undervalued at Friday’s closing price of $12.46.

New products on the way
Heavy investments in the development of new products led to a drop in NVIDIA‘s EPS last year, and analysts currently expect another drop this year. The decline of the traditional PC industry, combined with the advance of integrated graphics solutions from Intel and AMD, is reducing the addressable market for NVIDIA‘s legacy GPU business. Clearly, this business will not provide the growth necessary to reverse NVIDIA‘s fortunes. However, I think Mr. Market is underestimating the company’s long-term growth opportunities, which could eventually drive NVIDIA‘s stock price much higher.

The “problem” is that many of these growth opportunities will not be fully realized until next year. That’s why I previously called 2013 “a year of waiting” for NVIDIA shareholders. However, for a long-term investor, that’s not actually a problem; it’s an opportunity! For example, GRID will be NVIDIA‘s first foray into the “cloud” and consists of professional-quality GPUs built into a server. Different variants of GRID will accelerate virtual desktops (NVIDIA is partnering with major software vendors like Citrix, VMWare, and Microsoft for this purpose), replace individual workstations for small and medium-sized businesses, and run graphics for Internet-based gaming services. However, the workstation replacement and gaming versions of GRID will not be ramping up until mid-late 2013.

Similarly, Tegra 4i, NVIDIA‘s first integrated mobile processor, will finally allow the company to gain a foothold in the mid-range smartphone market (which is larger than the tablet …read more

Source: FULL ARTICLE at DailyFinance

Avid Motion Graphics 2.5 Delivers Enhanced Deployment Flexibility with New Configuration Options

By Business Wire via The Motley Fool


Avid Motion Graphics 2.5 Delivers Enhanced Deployment Flexibility with New Configuration Options

New dual-channel option and stand-alone playout engine enable broadcasters, sports teams, and media producers to more effectively create show-stopping brands

LAS VEGAS–(BUSINESS WIRE)– Avid® (Nasdaq: AVID) today announced the release of Avid Motion Graphics™ 2.5 (AMG), the company’s next-generation on-air graphics production platform for broadcasters, sports teams, and media producers. A new dual-channel configuration option and stand-alone playout engine increase flexibility and speed up graphics workflows, helping customers create stunning imagery and get work to air more quickly.

“Avid Motion Graphics has allowed us to engage fans and create excitement in ways that would have been impossible before,” said Craig Wilson, manager of Production and Creative Services, Corporate Marketing, St. Louis Cardinals. “The transition to AMG from the Avid Deko® solution has been a breeze. We can use our Deko graphics within the AMG environment, and easily link data from a variety of sources to any graphics parameter. With AMG, we’re extending the world-class St. Louis Cardinals brand that keeps our fans coming back time after time.”

AMG 2.5 features new configuration options that let customers get more high-quality graphics to air faster and more effectively, including:

  • Dual-channel configuration option — Increases playout capability, cost-effectiveness, and flexibility by adding a second GPU, I/O card, and more storage to support graphics playout for two channels.
  • AMG Engine — Stand-alone playout engine that allows customers to create a distributed system architecture, separating graphics creation from graphics playout. This provides cost-effective workflow options for larger operations.
  • Operations Console – Consolidates configuration of multiple AMG playout channels into a single centralized interface.
  • Dual-channel upgrade —Increases the playout capacity of an existing AMG system with minimal downtime, by providing a field-installable dual-channel upgrade.

“In today’s highly competitive, image-dominated media industry, stunning on-air graphics are critical to building winning brands,” said Dana Ruzicka, vice president of Segment and Product Marketing at Avid. “Content producers need the ability to more quickly and easily create visual imagery …read more

Source: FULL ARTICLE at DailyFinance

Review: MotionArtist lets artists become animators without learning code


Motion Artist 1.0 generates interactive HTML5 video presentations of comics and more. This full release offers more customization of animation files, tighter recording controls, and better asset editing compared to the July 2012 beta. Motion Artist ($60, buy-only) makes animating images and text relatively easy for comic artists and web designers working with imported files. Unlike with Flash, you can’t draw in the program and then animate your creations; the focus is on animating existing image files from other sources.

Artwork by Karen Luk
The blue line with dots represents an animation path. Under Project Settings, Motion Artist offers common video dimension sizes.

Motion Artist opens to a default project designed by Smith Micro, showcasing various animation techniques. However, comic artists unfamiliar with animation programs or film terminology might find the controls tricky when animating their comic pages. Imported PSD files maintain their layers for animation, or the user can composite the layers into a single layer. JPG, PNG, and files from Motion Artist vendor Smith Micro’s Anime Studio are other supported formats.

Motion Artist has three different views: Director, Camera and Panel. Animators will recognize the toolbar and scene list, and the timeline setup with its default of 30 frames per second. Thanks to GPU acceleration, users can play working files back in real time, which assists in editing the video. In addition to using your own video, you can animate panels, text and speech balloons with various effects in Motion Artist.

Comic artists can use different scenes to cut between comic panels or pages. For example, in film the opening credits can be the first scene, followed by one of the characters walking into camera view; in comics, it can be moving from one panel to the next or page to page. Motion Artist is set up for multiple scenes, so comic artists can animate individual pages or panels and then cut them together into a single presentation.


…read more

Source: FULL ARTICLE at PCWorld

Bitcoin mining malware spreading on Skype, researcher says

Security researchers from Kaspersky Lab have identified a spam message campaign on Skype that spreads a piece of malware with Bitcoin mining capabilities.

Bitcoin (BTC) is a decentralized digital currency that has seen a surge in popularity since the beginning of the year and is currently trading at over US$130 per unit, making it an attractive investment for legitimate currency traders, but also for cybercriminals.

BTCs are generated according to a special algorithm on computers using their CPU and GPU resources. This operation is called Bitcoin mining and is usually performed by users who operate multi-GPU computer rigs. However, mining efforts can also be pooled for better results.

Cybercriminals have figured out that distributed Bitcoin mining is a perfect task for botnets and have started developing malware that can abuse the CPUs and GPUs of infected computers to generate Bitcoins.
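
For readers unfamiliar with what the “special algorithm” actually does, here is a deliberately simplified sketch of the proof-of-work idea: hash some data together with a changing nonce until the double SHA-256 digest has enough leading zero bytes. Real Bitcoin mining hashes an 80-byte block header against a full 256-bit network target; the payload, loop bound, and three-zero-byte “difficulty” below are toy values for illustration only (requires OpenSSL, link with -lcrypto).

// Toy illustration of the proof-of-work idea behind Bitcoin mining: vary a nonce
// until the double SHA-256 of some data has enough leading zero bytes. Real mining
// hashes an 80-byte block header against a 256-bit target; values here are invented.
#include <openssl/sha.h>
#include <cstdio>

static void double_sha256(const unsigned char *data, int len, unsigned char out[32])
{
    unsigned char first[32];
    SHA256(data, len, first);            // first pass
    SHA256(first, sizeof(first), out);   // second pass
}

int main()
{
    const char *payload = "example block data";
    unsigned char buf[64];
    unsigned char digest[32];

    for (unsigned int nonce = 0; nonce < 100000000u; ++nonce) {
        int len = std::snprintf(reinterpret_cast<char *>(buf), sizeof(buf),
                                "%s:%u", payload, nonce);
        double_sha256(buf, len, digest);
        // Toy difficulty: three leading zero bytes (~1 in 16.7 million hashes).
        if (digest[0] == 0 && digest[1] == 0 && digest[2] == 0) {
            std::printf("found nonce %u\n", nonce);
            return 0;
        }
    }
    std::printf("no nonce found in range\n");
    return 1;
}

The brute-force nature of that loop is why GPUs, which run thousands of such hash attempts in parallel, took over from CPUs for mining, and why botnets of infected machines are attractive to the criminals described above.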


…read more

Source: FULL ARTICLE at PCWorld

Analysts Debate: Is NVIDIA a Top Stock?

By Alex Planes, Sean Williams, and Travis Hoium, The Motley Fool


The Motley Fool has been making successful stock picks for many years, but we don’t always agree on what a great stock looks like. That’s what makes us “motley,” and it’s one of our core values. We can disagree respectfully, as we often do. Investors do better when they share their knowledge.

In that spirit, we three Fools have banded together to find the market‘s best and worst stocks, which we’ll rate on The Motley Fool’s CAPS system as outperformers or underperformers. We’ll be accountable for every pick based on the sum of our knowledge and the balance of our decisions. Today, we’ll be discussing NVIDIA , one of the world’s leading graphics chip makers — and a growing presence in mobile devices.

NVIDIA by the numbers
Here’s a quick snapshot of the company’s most important numbers:

All figures are TTM or most recent available.

  • Market cap: $7.6 billion
  • P/E and forward P/E: 14.3 and 11.8
  • Revenue: $4.3 billion
  • Net income: $563 million
  • Free cash flow: $641 million
  • Return on equity: 12.5%
  • R&D ratio: 26.8%
  • Market share:
    • Discrete graphics processors: 17%
    • Smartphone applications processors: 5%
    • Tablet applications processors: 17%

Sources: Morningstar, YCharts, and news reports.

Alex’s take
I think NVIDIA has a lot going for it, but there are a few reasons for caution, as well. For one thing, the company is finally making moves into integrated smartphone processors with the Tegra 4i, its first with LTE capability. That will take the fight to longtime integrated-smartphone-chip category killer Qualcomm and its Snapdragons. NVIDIA also happens to be the top chip maker for Google‘s Android tablets. That doesn’t mean as much today as it’s likely to in the future, as lower-cost tabs undermine the iPad’s dominance.

On the flip side, NVIDIA isn’t actually a category leader anywhere. Apple still dominates tablet chips by dint of its in-house chip designs. Qualcomm owns smartphone processors. You might think that NVIDIA leads in PC graphics chips, but that’s not true, either — Intel has a commanding lead in that segment, serving 63% of the GPU market to NVIDIA‘s 17% in the fourth quarter of 2012. The Tegra 4i might change NVIDIA‘s fortunes in smartphones, but the company remains heavily reliant on its legacy graphics processors, as 81% of 2012 revenues came from the GPU segment. The Tegra segment has a long way to go to take over as NVIDIA‘s moneymaker.

In the end, after doing a detailed analysis of what I felt were NVIDIA‘s strengths and weaknesses, I’ve decided that it’s best to stay on the sidelines until the company’s future becomes clearer. To see how I arrived at this decision, click here to read my full report.

Sean’s take
NVIDIA has moved well beyond being just a graphics company, despite Wall Street’s insistence on valuing the company as if its Tegra line of processing chips …read more

Source: FULL ARTICLE at DailyFinance

NVIDIA Introduces Enhanced Line of GPUs

By Chris Neiger, The Motley Fool


Today, NVIDIA announced a new line of five notebook GPUs, called the GeForce 700M line. The company mentioned three new features that it says will improve the user experience:

  • NVIDIA GPU Boost 2.0 technology for better graphics performance.
  • NVIDIA Optimus technology, which turns the GPU on or off to preserve battery life.
  • GeForce Experience software, which “adjusts in-game settings for the best performance and visual quality specific to a user’s notebook” and keeps drivers up to date.

“With no effort or input from the notebook user, the technologies work in the background to save battery life, enhance performance and enrich the visual experience — providing the best notebook experience the GPU can deliver,” the company said in a press release.

The company added that “every leading notebook manufacturer” will be rolling out machines with GPU Boost 2.0 technology, including Acer, Asus, Dell, Hewlett-Packard, Lenovo, MSI, Samsung, Sony, and Toshiba.
 
The new line of GPUs will be used in the performance and mainstream segments and is available today.

The article NVIDIA Introduces Enhanced Line of GPUs originally appeared on Fool.com.

Try any of our Foolish newsletter services free for 30 days. We Fools don’t all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.

Copyright © 1995 – 2013 The Motley Fool, LLC. All rights reserved. The Motley Fool has a disclosure policy.


Read: http://www.dailyfinance.com/2013/04/01/news-nvidia-new-gpu-line/ …read more
Source: FULL ARTICLE at DailyFinance

Review: Dell Latitude 6430u offers high quality throughout

Whatever happens to Dell in the near future, let’s hope that the company keeps making notebooks as nice as the Latitude 6430u. In a world of cheap-feeling merchandise, it stands apart. It’s sedately handsome, ruggedized, remarkably stable in your hand and on the table, and a tactile joy to use. It’s also fast and available with a wide variety of warranty and support options, as a good corporate computer should be.

Why on earth would anyone start a review by talking about a laptop’s physical stability and feel? Partly because it’s better than talking about the 6430u’s 4-pound bulk, but also because there’s palpable pleasure in handling a well-balanced unit sporting a silky, soft-to-the-touch feel. The unusually uniform weight distribution comes courtesy of the flat battery pack that occupies the lower front quarter of the unit. That balance makes the unit feel lighter than it actually is.

The 6430u’s heft also makes it a very stable typing platform, which accentuates the already nice feel of the Chiclet-style keyboard. The keyboard response is a little lighter than a Lenovo’s, but combined with the solidity of the 6430u, the overall experience might actually be a wee bit better—high praise in my book. The keyboard is backlit, and you can control the lighting intensity via a function-key combination. The unit’s touchpad has silky-smooth response, and buttons on the top and bottom reduce hand travel when clicks are required. Even the eraserhead pointing device is well adjusted. Someone at Dell obviously spent considerable time designing the ergonomics.

Our test configuration of the 6430u sported an Intel Core i5-3427U processor, 8GB of DDR3 memory, and a 128GB Samsung PM830 solid-state drive, which helped the unit earn a very capable score of 78 on PCWorld’s WorldBench 8 test suite. Gaming on the integrated Intel HD 4000 GPU is mediocre but doable at 800 by 600 or so. There’s no discrete-GPU option, but this is a business machine first and foremost.


…read more
Source: FULL ARTICLE at PCWorld

Nvidia launches a “sweet spot” GPU of its own

Nvidia today fired the next salvo in the GPU arms race: The GeForce GTX 650 Ti Boost. Nvidia and arch-rival AMD have decided that 30 frames per second at 1080p resolution is the gaming sweet spot, and so the GPU designers have set about beating each other over the head to build the best chips for delivering that performance at a $150 price point.

While both companies happily oblige gamers craving higher performance (those with the financial means to satiate their hunger, that is), it’s the mainstream products that generate the most cabbage. And they’re certainly justified in designating 1080p a “mainstream resolution,” since that’s the spec most consumer-oriented 23- and 24-inch displays deliver.

To that end, Nvidia is looking to chop the legs out from under AMD, which announced its own “1080p sweet spot” offering, the Radeon HD 7790, just last week. AMD set a price target of $150 for Radeon HD 7790 cards in order to compete with boards based on Nvidia’s GeForce GTX 650 Ti, which until then had been the sole occupants of that price bracket.

Image credit: Nvidia
The price of cards based on Nvidia’s new GeForce GTX 650 Ti Boost GPU will start at $149.

AMD maintains that its 7790 is on average 20 percent faster than Nvidia’s original 650 Ti. But Nvidia announced that existing GeForce GTX 650 Ti cards with 1GB of memory will now sell for just $129. The new and faster GeForce GTX 650 Ti Boost, which Nvidia claims is on average 40 percent faster than the original 650 Ti and 10 to 20 percent faster than AMD‘s pricier Radeon HD 7850, will sell for $149. But there’s a slight catch: the 1GB 650 Ti Boost cards won’t begin shipping until early April. The models Nvidia says consumers can buy today come with 2GB of memory and cost $169.


…read more
Source: FULL ARTICLE at PCWorld

OpenGL in Qt 5.1 – Part 4

This article continues our series on what is new in Qt 5.1 with respect to OpenGL. Earlier articles in this series are available at:

OpenGL Debug Output

The traditional way to debug OpenGL is to call glGetError() after every GL function call. This is tedious, clutters up our code, and doesn’t warn about performance issues or other non-error situations. In the last couple of years various debug extensions have been proposed and have proven their usefulness. These have very recently been unified into the GL_KHR_debug extension for both OpenGL and OpenGL ES.

KDAB engineer Giuseppe D’Angelo has exposed the functionality of the GL_KHR_debug extension via the new class QOpenGLDebugLogger, which will also be part of Qt 5.1. The QOpenGLDebugLogger class can be used either to request previously logged messages from OpenGL or to emit a signal each time OpenGL logs a message. The QOpenGLDebugLogger::messageLogged() signal can be connected to a slot where you can respond to the message appropriately, say by outputting it with qDebug(), handling the error, etc.

The signal can be emitted either asynchronously, for minimal performance impact on your running application, or synchronously, with a larger performance impact. Although the synchronous approach has a cost, it has one massive advantage: setting a breakpoint in a slot connected to the messageLogged() signal lets you break execution and see the stack and the exact OpenGL function call that caused the error or warning. This is incredibly useful when debugging OpenGL applications, and there’s not a glGetError() call in sight!
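
For completeness, here is a minimal sketch of what such a slot might look like. The slot name onMessageLogged() matches the connect() call in the snippet below, but the body (simply streaming the message to qDebug() and flagging high-severity messages) is just an illustration, not code from the article:

// Sketch of a slot for QOpenGLDebugLogger::messageLogged(). Assumes the Scene
// class from the snippet below; the handling shown here is illustrative.
#include <QOpenGLDebugMessage>
#include <QDebug>

void Scene::onMessageLogged( const QOpenGLDebugMessage &message )
{
    qDebug() << message;   // QOpenGLDebugMessage provides a QDebug stream operator

    // With SynchronousLogging, a breakpoint (or an assert) placed here shows the
    // exact GL call that triggered the message in the backtrace.
    if ( message.severity() == QOpenGLDebugMessage::HighSeverity )
        qWarning() << "High-severity OpenGL message:" << message.message();
}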

Using the above mechanism, OpenGL is also able to provide informational messages to us as well as errors. These may include data about where particular vertex buffer objects reside (GPU or CPU memory), whether the correct usage hint has been given for a buffer object, or whether we are violating it and causing the driver grief, resulting in performance issues. All of these and more are now trivially available to us. It is even possible for your application, or Qt itself, to inject its own messages into the OpenGL logging system, and we can filter based upon message type, severity, etc.

Using the QOpenGLDebugLogger is very simple:

void Scene::initialize()
{
    m_logger = new QOpenGLDebugLogger( this );

    connect( m_logger, SIGNAL( messageLogged( QOpenGLDebugMessage ) ),
             this, SLOT( onMessageLogged( QOpenGLDebugMessage ) ),
             Qt::DirectConnection );

    if ( m_logger->initialize() ) {
        m_logger->startLogging( QOpenGLDebugLogger::SynchronousLogging );
        m_logger->enableMessages();
    }

    // Populate a buffer object
    m_positionBuffer.create();
    m_positionBuffer.setUsagePattern( QOpenGLBuffer::StreamDraw );
    m_positionBuffer.bind();
    m_positionBuffer.allocate( positionData,
                             ...read more 
Source: FULL ARTICLE at Planet KDE

AMD's newly-announced Radeon HD 7790 guns for the budget 1080p gaming crown

Last month’s release of Nvidia’s Titan graphics card—the most powerful consumer GPU ever announced—may have inspired uncontrollable drooling among the enthusiast crowd, but at a cool $1000, the card simply isn’t priced to move. AMD‘s latest release takes a different tack. Today, the company announced the Radeon HD 7790 series graphics card, a $150 mid-range GPU designed to bring better 1080p gaming to the masses.

The Radeon HD 7790 fills a hole between the Radeon 7770 GHz Edition, which is typically priced between $100 and $110, and the $180 and up Radeon 7850. At $150, the Radeon HD 7790 directly competes against Nvidia’s GeForce GTX 650 Ti, which has thus far been sitting uncontested at that particular price point.

Most of AMD‘s press materials for the Radeon HD 7790 unsurprisingly compare its 1080p gaming performance against Nvidia’s counterpart, with AMD‘s card claiming frame rate victories to the tune of 8 to 32 percent across a slew of games—and a whopping 67 percent frame rate lead over the Nvidia GTX 650 Ti in DiRT Showdown. (That game heavily favors AMD graphics cards, to be fair.) AMD claims the Radeon HD 7790 delivers “an average performance advantage of up to 20 percent over the GTX 650 Ti.”

Image credit: AMD
AMD‘s benchmarks compare the Radeon 7790 against the Nvidia GTX 650 Ti at 1080p resolution.

The Radeon HD 7790 offers full DirectX 11.1 support and works just fine with EyeFinity multi-monitor setups, though frame rates will obviously drop if you’re rocking several displays. Fortunately, AMD loaded the Radeon HD 7790 with CrossfireX support just in case you want a graphical boost down the line. The GTX 650 Ti, on the other hand, doesn’t support multi-card solutions.


…read more
Source: FULL ARTICLE at PCWorld

HP Envy Phoenix h9-1420t review: Gaming power in a subtle form

By gaming standards, the HP Envy Phoenix h9-1420t’s appearance is positively subdued. This midsize tower PC has some red backlighting and a clear pane so that you can gaze at the liquid cooling unit, but aside from that it could easily pass for a conventional HP desktop. Although it doesn’t have much in the way of bling, the Phoenix delivers better-than-average performance at a cheaper-than-boutique price. Down-the-road upgrade options, on the other hand, are limited by its decidedly nonenthusiast motherboard.

Components and performance

Our $1840 h9-1420t test configuration sported an unlocked 3.5GHz Intel Core i7-3770K processor. Thanks to the liquid cooling unit, the system had no problem maintaining 4GHz, and it likely has at least a little more headroom. The Pegatron (that’s Asus’s OEM arm) 2AD5 motherboard offers minimal overclocking controls in its BIOS, but it isn’t completely locked down. You can set each core’s maximum frequency multiplier separately, but you get no provisions for tweaking the operating voltage, for instance. The board also has just a single full-size PCIe slot, so you can forget any dual-card graphics upgrade via SLI or CrossFire.

Fortunately, HP picked a strong graphics card, inserting an Nvidia GeForce GTX 680 with 2GB of GDDR5 memory. With that card in place, the Phoenix managed a playable frame rate in Dirt Showdown right up to the 2560 by 1600 resolution of our 30-inch test display. The game wasn’t as silky smooth at that resolution as it was at lower ones, but it was certainly playable. Should you decide to buy an h9-1420t online, HP allows you to customize the configuration to a degree, but your options don’t include Nvidia’s best GPU, the GeForce GTX 690.

The other core components on our test machine included 12GB of DDR3-1600 memory and a 2TB, 7200-rpm hard drive, which helped the h9-1420t produce a very good WorldBench 8 score of 87. A solid-state drive would have boosted the score even more, but that option wasn’t available when we ordered our evaluation unit. HP has since corrected that omission, but there’s no getting around that single multilane PCIe slot, which is a puzzling design decision in a PC whose primary reason for existence is performance.


…read more
Source: FULL ARTICLE at PCWorld