This is exactly the reason I am getting it. I carry it in a backpack on a weekly basis, and I need to be able to put other things in the bag besides an oversized laptop.
-
Waiting on the review of the SG first.
-
[email protected] Notebook Consultant
So, no, they will not outlast children. That's a poor claim made over and over by "enthusiast PC" advocates. Taking shock out of the equation, on the base physics, platter writes -- writes being the key here -- are actually far, far more deterministic (although a platter's 10^13 error rate is too high in the age of TBs). Every single NAND block has bit errors, and it comes down to managing them with on-board controller logic, with write-back checks and other things. That's before we even look at rotating blocks out if and when it becomes infeasible to continue using them.
I.e., everything the on-board controller does is an attempt to mitigate risk during writes, because errors constantly occur during them.
NAND is most ideal for random-access reads of static information. Unfortunately, Windows PCs are very, very write-heavy, and general purpose filesystems are not designed for NAND. NT does not do a read-only boot. NT was not designed to separate dynamic, variable/temporary and static data, especially programs, settings, etc... And the FAT filesystem is less ideal from a write standpoint than others. People go on about "trim" and "discard," but don't understand that it is just a very, very elementary solution to keep the general purpose filesystem, which is already a poor design for NAND, from overusing it.
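To make that concrete, here's a toy sketch (Python) using the standard greedy garbage-collection approximation; the valid-page fractions are invented numbers, not measurements. The point is only that trim changes one input to the same old equation rather than fixing the underlying mismatch:

    # Greedy-GC write amplification is roughly 1/(1-u), where u is the
    # fraction of pages in the victim erase block the controller must
    # copy forward before it can erase the block.
    def write_amplification(valid_fraction):
        return 1.0 / (1.0 - valid_fraction)

    # With trim, the filesystem tells the device which pages are dead:
    print(round(write_amplification(0.5), 1))  # ~2.0x
    # Without trim, pages of deleted files still look live, and get
    # copied forward during every collection:
    print(round(write_amplification(0.8), 1))  # ~5.0x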
SIDE NOTE: I don't think people realize how much NT writes during boot, or the real problem of every NT program requiring a start-up directory (legacy Win32 even required one that is writeable). These are integrity and security issues that Microsoft has long struggled with, and they are major issues in the embedded world. Microsoft cannot get control of them without breaking compatibility, although they have come up with an increasing number of hacks.
If you write heavily to NAND, expect to get errors over time. Most of the tests done are not done under real operating environments; platter tests are similarly unrepresentative. Fortunately there are some new filesystem designs coming for some OSes that attempt to mitigate the issues further. Even Microsoft is still trying to get its 18-year-old Cairo filesystem lineage to the point it can be broadly used to solve many issues with the FAT-derived design of NTFS. And, again, that's still a general purpose filesystem design, and not one that does write verification.
SIDE NOTE: On write verification, in Microsoft's defense, on other platforms only experimental filesystems (e.g., btrfs) do it, other than Oracle/Solaris' ZFS (which is now production-proven).
I honestly hope Windows 10 solves some of these issues. But all Microsoft has done so far, in Windows 7+, is bring forward their old Embedded NT "overlap" approach. We'll see where the NVMx generation goes in this matter, as it can bring some new ideas to bear on the problem. -
Even in a worst-case scenario, 1TB of TLC will last 5+ years.
http://www.anandtech.com/show/8520/sandisk-ultra-ii-240gb-ssd-review -
HTWingNut, have you tried booting up via a PCIe SSD?
-
Should work without problems, but only on UEFI-enabled Win8 systems.
-
[email protected] Notebook Consultant
But general purpose OSes and filesystems have a long way to go to reach what embedded has long been doing. Again, people talk about trim/discard like it's something big. It's very minor. It just mitigates the writes that a general purpose filesystem, running on a general purpose OS, would do nominally. But even with that mitigation, they do a lot of writes compared to embedded (let alone typical NT v. platforms that have read-only boots, modes, separation of dynamic, var/tmp, static, etc...).
E.g., things like Intel Smart Response Technology (SRT) exist for a reason: the most ideal application of NAND is its massive read-latency advantage over spindle, without the errors of writing regularly. It's not just a cost detail. -
[email protected] Notebook Consultant
The future of NVMx will bring a lot of changes in how general purpose OSes, their filesystems and storage interact. For example ... journaling.
Virtually all general purpose filesystems today do at least "meta"-data journaling. This means the information on a file is committed to a journal, a special allocation in the filesystem, first; then the data blocks are actually written; and then the "meta"-data is "finalized" to its proper location, instead of the journal. That right there is an operation that often uses up a block, if not several. In fact, if the filesystem does inherent verification and bitrot checking, it's after the data is written that it's compared against the checksums, before the metadata commit is actually finalized.
NOTE: I'm oversimplifying how journaling works, as FAT and inode are very different in design, hence the quotes around "meta."
Ideally, journals should be external to the store, like non-volatile RAM (NVRAM). Typically this is either capacitor-backed Static RAM (SRAM) or, more affordable/sizeable, battery-backed Dynamic RAM (DRAM). Now, what do commodity NAND devices have in-device already? Usually both: they are very intelligent devices, with a controller with local SRAM plus additional, sizeable DRAM buffers to deal with commits, as NAND is very, very slow at writes. So ... imagine if the interface and drivers could work with the OS and its general purpose filesystem to take advantage of the NAND device's own NVRAM stores.
No longer would there be a need to allocate a special set of blocks in the filesystem for the journal; the filesystem would use an external journal via these special NVMx functions. At the same time, you get the nice benefit of removing the nominal "double commit" for the journal, and the waste of NAND write cycles. Win-win-win. Heck, we might even get to the point, with typical NAND-backed storage, where full data journaling can make a return "for free," using the on-board NVRAM in a typical device.
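A minimal sketch (Python) of that "double commit," just counting device writes per file update. The three-step sequence is the simplified one described above, and the one-block-per-step counts are illustrative assumptions, not any particular filesystem's behavior:

    # Metadata journaling as described above: intent -> data -> finalize.
    def journaled_update():
        journal_commit = 1   # metadata written to the on-disk journal
        data_commit = 1      # the data blocks themselves
        finalize = 1         # metadata rewritten to its proper location
        return journal_commit + data_commit + finalize

    # If the journal instead lived in the device's own NVRAM (SRAM/DRAM),
    # the journal commit would cost no NAND write cycles at all:
    def external_journal_update():
        data_commit = 1
        finalize = 1
        return data_commit + finalize

    print(journaled_update())         # 3 device writes per update
    print(external_journal_update())  # 2 NAND writes; journaling is "free"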
And that's just one example that is a reality in the new world of PCs with NAND-heavy stores. The whole SATA interface (and Serial Attached SCSI[-3], SAS, for that matter) and approach is really legacy, designed for platters. We use NAND very, very inefficiently. Sitting there and doing block and other tests is hardly "real-world." I've spent a number of years dealing with this, and knowing the underlying design of typical NAND allocations (all of them have errors from the factory; it's how the physics works). Everything is about risk mitigation, not elimination.
And the general purpose OS and filesystem will always tax them. -
Yes, but I trust SanDisk to have strong knowledge of wear-leveling and error-correction coding.
-
I thought this was a P650 thread... With every new answer here I hope to read something interesting about this laptop. I think there is an SSD topic on this forum?
Hope to get on topic now here with you guys. No hard feelings -
[email protected] Notebook Consultant
I.e., to take full advantage of things like NVMx, we'll need the OS to be far more capable than looking at NAND devices as just another ATA store.
But yes, the major problems with booting on the PC architecture have been related to the legacy BIOS Int 13h Extended Disk Services, and how NT has been designed around them since NT4 SP4. They fixed a number in NT6.0 (Vista), but even the 6.1 and 6.2 releases (various 7/8 SP releases) have found pre- and post-BOOTMGR/NTOSKRNL aspects that are slowly being addressed, just for the basic block interfaces.
I.e., Serial ATA (SATA) was designed for AT Attachment (ATA) compatibility, which itself was a 16-bit "dumb" interface going back to the original IBM PC/AT, and which has been adapted into a 4-layer design with a 32-bit software stack to handle 512-byte allocations. If it's not SATA compatible, there are all sorts of layers that need to be handled by the firmware (e.g., UEFI), the OS boot loader (BOOTMGR/NTOSKRNL in NT6+) and the OS itself post-kernel load.
**NOTE: I'm not just talking about logical v. physical. Even the "logical" is just a construct. In reality, the OS needs to be working on much, much larger blocks (512KiB-1MiB), especially when it comes to commits. -
[email protected] Notebook Consultant
The simple matter is that cells that store 3 bits eventually reach a point where they are unviable, within a hundred writes. Not around 1,000 like MLC, and not around 10,000 like SLC, but just a single hundred. Now we can use all kinds of parity mechanisms, and even get creative with various encodings for fault-tolerance. But at some point the simple physics is very much involved; even if we rotate cells and mark blocks unviable, it still happens.
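For a sense of scale, here's a back-of-the-envelope lifetime calculation (Python) using the cycle counts above. The drive size, daily host writes and write-amplification factor are invented example numbers:

    # Rough endurance math: raw endurance / effective daily NAND writes.
    def lifetime_years(capacity_gb, pe_cycles, daily_writes_gb, write_amp=3.0):
        total_nand_writes_gb = capacity_gb * pe_cycles
        days = total_nand_writes_gb / (daily_writes_gb * write_amp)
        return days / 365.0

    # A 1TB drive seeing 20GB of host writes per day:
    for name, cycles in (("TLC", 100), ("MLC", 1000), ("SLC", 10000)):
        print(name, round(lifetime_years(1000, cycles, 20), 1), "years")
    # TLC ~4.6, MLC ~45.7, SLC ~456.6 -- wear-leveling across a whole
    # terabyte is why even a pessimistic cycle count yields years of use.

Which is also roughly how the earlier "1TB TLC will last 5+ years" worst-case estimate comes about.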
I mean, I can boot a Live CD and throw some extensive "dd" tests at a NAND device, which is what many have done. But that won't give me real-world experiences.
I.e., I write, say, a typical 1MiB sequential write, or maybe even many dozens of random 64KiB writes, at a device, and can claim X errors on Y MiB. But that doesn't cover the case of a FAT update for a file in Windows, or an attribute that modifies a 512-byte inode in Linux. It also doesn't cover the countless writes that happen in NT constantly that, in total, may only modify 1MiB of actual data -- yet actually commit hundreds of times. Some of the overlay in NT6.1+ (7+/2008+) helps, along with how bdflush has always worked in Linux since circa 2.0 in the late '90s.
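A quick illustration (Python) of the amplification this causes on NAND. The 16KiB page size is an assumed typical geometry; only the 512-byte inode update comes from the example above:

    # Even a tiny in-place metadata update forces at least one full
    # NAND page program (the smallest programmable unit on the die).
    PAGE = 16 * 1024

    def bytes_programmed(write_bytes, count):
        pages = max(1, -(-write_bytes // PAGE))   # ceiling division
        return pages * PAGE * count

    logical = 100 * 512                       # a hundred 512B inode updates
    physical = bytes_programmed(512, 100)
    print(logical, "bytes changed ->", physical, "bytes programmed")
    print("amplification: ~%dx" % (physical // logical))   # ~32x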
That's how general purpose OSes and their filesystems work. Having trim or discard to deal with changing filesystem block allocations doesn't impact those realities much.
It's why everyone recommends not having a swap file in Windows if you don't have anything but a NAND device, along with no swap (or just setting vm.swappiness = 0) in Linux. Of course, NT doesn't have something like tmpfs, and temp files are all over, along with logging, etc... There are ways to mitigate portions of /var as well in Linux, although it will always be the weak point of any POSIX (Linux/UNIX) platform. But even that still doesn't mitigate typical, common, real usage of any FAT or inode system, even if NT and FAT suffer far more because of the lack of separation of temporary, variable and other files from static binaries and files, user dynamic content, etc...
Although the separate and finally standardized (only after about 12 changes since the mid-'90s) C:\Users helps now, since it can be an NT mount somewhere other than the NAND device. -
[email protected] Notebook Consultant
I've been doing NOR, NAND and other stores since the '90s. I've dealt with NAND failures over the years, including with the commodity solutions throughout the last decade. It's all just risk mitigation. One can't change the physics, although people are working on the general OS/filesystem issues that are the core culprits.
The fact that the P650x will have a PCIe x4 slot, and will adopt NVMx as a result, will provide for an interesting future. Because SATA was never designed for commodity NAND, or really any intelligent EEPROM technology store for that matter. -
-
Sorry, but I will keep trolling until I get a damn review, picture or ETA of the P650SG, dammit
Well, since SSDs will be common in the P650, it's always interesting to know how to choose them correctly. Plus, we're future P650sx owners -
[email protected] Notebook Consultant
In any case, for boot, one could still use a different M.2 (or SATA port) device for the uEFI System Partition (ESP) from the NT "System Volume" (BOOTMGR/NTOSKRNL) and "Boot Volume" (\WINDOWS). [1] One could even put the "System Volume" on the same device as the ESP if required, separate from the "Boot Volume" too. One or the other should solve the problem quite nicely -- assuming, of course, neither Intel nor Microsoft has purposely crippled the firmware/driver to prevent the PCIe device from working at all for the "Boot Volume" (\WINDOWS), which is typically the "C:" drive (while the "System Volume" gets "D:", even though it loads and is enumerated first).
It's the one thing I cannot stand about Intel, and it drives me right to AMD when I'm building engineering labs on a budget (i.e., don't get me started on VT-x and VT-d v. AMD-V with regard to IOMMU and SR-IOV). And, again, Microsoft has a longstanding history of the NT block drivers requiring specifically formatted information, etc... in hardware, beyond the hidden sectors too, just for boot-time support.
Although in the case of Microsoft, the 128MiB (32MiB if the disk is under 16GiB) Microsoft Reserved (MSR) partition is supposed to remove most of the need for hidden/undocumented hardware/software information for boot and run-time support with the GUID Partition Table (GPT). That said, I haven't tested this fully yet, and there might be some other issues.**
I've been doing uEFI-GPT boot since 2010 on Linux, and documented some of the earliest GRUB 0.97/1.98 (v1/v2) issues. NT was way, way behind back then, although several hotfixes, and even full SPs, have addressed the issues over time. That was the case back in the '90s as well with NT4, where SP4 "fixed" it in a weird way for large MBR boots that is still with us (and dominated DOS7/Win9x until its retirement). But at least they created the MSR this time around for GPT, although it probably doesn't mitigate everything.
I.e., you'd be surprised at the number of endless hacks I've found in hidden sectors on, and even registers in the controller of, a hard drive, done by or required by Microsoft for boot over the years. Embedded NT and CE development taught me a lot.
-- bjs
[1] https://support.microsoft.com/kb/100525
Yes, I know that seems the opposite of what the terms should be. But it's a Microsoft'ism that has been long-standing since NT3.1.
**P.S. Professional preference: I always create every GPT disk with the first 895MiB (1-896MiB) for the ESP and then the next 128MiB (896-1024MiB) for the MSR, which starts the rest of the disk on a 1GiB boundary (which is always going to be aligned for any sub-allocation). Why such a large ESP? Some of us also like having the option of an EFI Shell ... in case an OS doesn't boot. At a minimum, I wouldn't create an ESP smaller than 383MiB (1-384MiB), with the next 128MiB (384-512MiB) for the MSR.
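If anyone wants to sanity-check the alignment math, here's a quick sketch (Python), assuming 512-byte logical sectors; the sizes come straight from the preference above:

    # Sectors per MiB at 512-byte logical sectors.
    MIB = 1024 * 1024 // 512    # 2048

    layout = [
        ("ESP", 1 * MIB,   896 * MIB),     # 1MiB..896MiB    -> 895MiB
        ("MSR", 896 * MIB, 1024 * MIB),    # 896MiB..1024MiB -> 128MiB
    ]
    for name, start, end in layout:
        print(name, "LBA", start, "..", end - 1,
              "(", (end - start) // MIB, "MiB )")

    # The OS volume then starts at LBA 2097152 -- exactly 1GiB -- so any
    # power-of-two sub-allocation below that stays aligned.
    print("next partition starts at LBA", 1024 * MIB)
-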
[email protected] Notebook Consultant
In all honesty, I have *0* issue with a moderator moving my posts to an appropriate thread. Correspondingly, anyone can always suggest any alternative threads and I will take the time to repost there, reducing the original response here to nothing more than a "stub."
The only thing that kills me is when someone deletes a post instead of relocating it, or asking me to do so. Then, a few months later, someone has the exact question or issue it applies to, and I no longer have the text I wrote for them to find in a search.
I.e., no logic can mitigate the nominal error rate that will occur in the device itself, only the chance the user will be a victim of one of them, as they do very much constantly occur. -
-
-
-
The notebookcheck review showed that the CPU hits 100C during their stress test. Has anyone experienced something similar after a couple hours of BF4 or another demanding game?
-
-
Just wait for HT's review, it will tell you everything you need to know. -
I'm working on it, I'm working on it!
Here's a sneak peek:
If you can tell me what movie that is, I'll give you a cookie. -
-
What is this new batman laptop? Link please. -
I have no idea how long it's going to take to get this laptop (XoticPC). Going to the bank tomorrow to pay for it. I'm going to be pessimistic and say 2 weeks.
-
-
-
-
All I wanna do is play GTA V. Will a 980M 4GB suffice? I'm getting tired of waiting for Batman
-
Grand Theft Auto V System Requirements and GTA 5 requirements for PC Games -
For now, I'd go with a regular 2.5" SSD; M.2 models aren't "there just yet"
@flamy: don't turn your back on the Batman! He will come and eat your P650 at night!
Oh, you guys are cruel.
*slinks back to watching gameplay vids on youtube* -
-
[email protected] Notebook Consultant
So I'm still trying to find out more about the NVMx spec, what has been implemented, etc... The earlier comments about the Z97 being the only chipset that supports NAND PCIe booting have piqued my interest, because it sounds like Intel didn't address various needs in the firmware, registers, etc... Again, any time one goes off the ATA/AHCI stack, one removes all sorts of established support. So there have to be equivalent (if not superset, for maximum performance) subsystems to AHCI in the NVMe stack of the firmware, boot loader, kernel, etc...
For most people, the read latency of any NAND technology is going to destroy platter. So the bump from platter SATA to NAND SATA is going to be massive. But if you're looking for the same bump from NAND SATA to NAND PCIe, that's not going to happen.
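Some rough ballpark latency numbers (Python) to show why; these are generic order-of-magnitude figures for the drive classes, not measurements of any particular device:

    # Random-read latency, order of magnitude only.
    platter_ms   = 12.0    # 7200rpm seek + rotational latency
    nand_sata_ms = 0.10    # ~100us NAND read over AHCI/SATA
    nand_pcie_ms = 0.05    # lower protocol/stack overhead over PCIe

    print("platter -> NAND SATA: ~%.0fx" % (platter_ms / nand_sata_ms))     # ~120x
    print("NAND SATA -> NAND PCIe: ~%.0fx" % (nand_sata_ms / nand_pcie_ms)) # ~2x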
As far as "aren't there just yet" goes, remember, the P65x has two (2) available M.2 slots, one (1) of them PCIe.
So ... even if you have issues booting from a NAND PCIe device, one can still use the other, NAND SATA device as one's boot. It all depends on how comfortable you are with manually setting up a separate Windows System volume (BOOTMGR-NTOSKRNL) from the Boot volume (\Windows), and the EFI System Partition (ESP) in the case of a uEFI-GPT firmware disk label.
But yes ... we haven't even begun to address the potential of NAND EEPROM devices versus spinning platters, because the AHCI/SATA interface was designed for the latter, and heavily inhibits and prevents proper support of the former. -
-
-
Hi
I want to order some RAM for the P650. Should I use DDR3 or DDR3L (low voltage)?
What is the difference?
Thanx -
DDR3L (1.35V)
-
My choices are:
PC3-17000 (DDR3-2133): €170, CAS latency 11
PC3-14900 (DDR3-1866): €158, CAS latency 10
PC3-12800 (DDR3-1600): €146, CAS latency 9
Should I notice any difference between those 3 in normal gaming and browsing/office?
What is the best price/performance for me?
Forget the 1866MHz version; go for the 2133MHz version. BTW, what country are you from? You can buy Kingston HyperX 16GB 2133 for £108.65 on Amazon UK
-
-
The new ETA for the SG is December 9th (PCSpecialist's answer to my enquiry)
-
Some dutch prices to compare with:
Crucial M500 2,5" 240GB = €92
Crucial MX100 256GB = €92
Crucial M500 M.2 240GB = €103
Are we so damn expensive here? Haha..
-
So what type of storage options are you folks going with? I'm debating between an mSATA OS drive plus a 1TB Samsung Evo SSD, or just the 1TB Evo SSD. I've got a smaller SSD as an OS drive in my desktop and wish I had gone larger. I'm trying to avoid a mechanical drive.
-
In total, RAM + SSD = £188/£238 (without delivery charge) -
-