PC problem: 200GB free, not enough memory to copy 7GB


dknguyen

Hello. I'm having a problem with my external hard drive. I'm trying to copy a single 7GB file to it, but it always says there is not enough memory, and Disk Cleanup tries to clean up the external HD, which has 200 gigs free. There are 14 gigs free on my internal HD.

Does anyone know why this is? It can copy a bunch of smaller files fine, but if a single file is too large it gives this message.
 
Your local hard drive is most likely formatted NTFS, and the external hard drive is formatted FAT32. FAT32 has a maximum single-file size of 4GiB. The only way to store files that large on FAT32 is to split them up into more than one file. If you absolutely require storing files over 4GiB on the drive, then you're going to have to repartition it into a main FAT32 section for most files and an NTFS section for large files, or reformat the whole thing to NTFS. NTFS is relatively well supported by the *nixes nowadays, so there's not much of a drawback unless you still need the drive to be accessible under Windows 98. Not sure about Mac OS.
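
If you do go the split-file route, the idea is just to cut the file at a size FAT32 will accept. Here's a minimal sketch of that in Python; the file name, chunk size, and buffer size are my assumptions, not anything from this thread:

```python
# Split a large file into pieces that fit under FAT32's 4 GiB limit.
# Illustrative only: file name, chunk size and buffer size are assumptions.

CHUNK = 4 * 1024**3 - 1   # just under the 4 GiB single-file ceiling
BUF = 1024 * 1024         # copy 1 MiB at a time, so RAM use stays small

def split_file(path: str) -> None:
    """Write path as path.000, path.001, ..., each at most CHUNK bytes."""
    with open(path, "rb") as src:
        part = 0
        while True:
            buf = src.read(min(BUF, CHUNK))
            if not buf:
                return  # end of input, no empty trailing part
            written = 0
            with open(f"{path}.{part:03d}", "wb") as dst:
                while buf:
                    dst.write(buf)
                    written += len(buf)
                    if written >= CHUNK:
                        break
                    buf = src.read(min(BUF, CHUNK - written))
            part += 1

split_file("backup.iso")  # hypothetical stand-in for the 7GB file
```

The pieces can be rejoined later with a plain binary concatenation (e.g. `copy /b` on Windows).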
 
Damn, I didn't notice that the external HD came formatted in FAT32 when I bought it; I just started using it. Oh well, it was an ISO file, so I mounted it and copied the contents instead, which broke things up into smaller files without zipping or anything.

I still need a new computer anyway, with way more hard drives that are much bigger. I bought a USB2 card to salvage the data transfer speed on the one I have right now, and it's faster than plain USB... but not nearly as fast as it was supposed to be.

Transferring 7 gigs took about 212 minutes with USB; with USB2 it now takes 30 minutes, but doing the math it was supposed to take about 3 minutes. I guess the USB was running way slower than its theoretical maximum too... I should have gotten a FireWire HD...
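
Just to show where that "3 minutes" comes from and why the gap is so large, here's the back-of-envelope math (a sketch assuming a 7 GiB file and the nominal signalling rates; it ignores protocol overhead, which is exactly the point made in the replies below):

```python
# Theoretical best-case transfer times for a 7 GiB file at nominal bus rates.
FILE_BYTES = 7 * 1024**3

for name, mbit in [("USB 1.1", 12), ("USB 2.0", 480), ("FireWire 400", 400)]:
    bytes_per_sec = mbit * 1_000_000 / 8      # raw line rate, no overhead
    print(f"{name}: ~{FILE_BYTES / bytes_per_sec / 60:.1f} min at best")
# USB 1.1 -> ~83.5 min, USB 2.0 -> ~2.1 min, FireWire 400 -> ~2.5 min
```

Against the observed 212 and 30 minutes, both buses were delivering well under half their nominal rate.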
 
FireWire is slower than its advertised 400Mbit speed as well.

Those speeds (480Mbit for USB2, 400Mbit for FireWire 400) are peak burst speeds; there's no way the hardware could sustain that kind of transfer.

I have had decent luck getting ~200Mbit/s from FireWire linking two PCs with it, but I now run gigabit Ethernet and see 300-400Mbit from it :)
 
dknguyen: USB's maximum theoretical speed assumes two things. One, you're only counting the absolute bit rate and NOT the protocol's overhead in the transfer rate; and two, you're assuming there are no other devices on the bus. Keyboards aren't usually too bad, but USB mice can massively throttle transfer speeds. I'm not talking about ports either; most PCs only come with two USB buses. You should take stock of all your USB devices and determine what should be on which host, and where those hosts are physically located and mapped in software. It's like the EPA MPG estimate on a car: no one EVER gets the EPA figure in a real car; it's not physically possible.
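
If you want to see what actually shares a bus, one option is to enumerate the devices programmatically (a sketch, assuming the third-party pyusb package and a libusb backend are installed; nothing below is specific to this thread):

```python
# List every USB device with its bus number, so you can spot what
# (mouse, keyboard, card reader...) shares a bus with the external drive.
import usb.core  # third-party: pip install pyusb (needs libusb)

for dev in usb.core.find(find_all=True):
    print(f"Bus {dev.bus:03d} Device {dev.address:03d}: "
          f"ID {dev.idVendor:04x}:{dev.idProduct:04x}")
```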
 
I don't know about Mac OS X, but Linux and BSD can only support NTFS in read-only mode. If you want read/write, there's the Captive project, but you need a Windows license to legally use the drivers.

You can get a freeware Ext2 driver for Windows, which is also a possibility.
 
Hero999 said:
I don't know about Mac OS X, but Linux and BSD can only support NTFS in read-only mode. If you want read/write, there's the Captive project, but you need a Windows license to legally use the drivers.

I wouldn't worry about NTFS. I've been playing with a FireWire/USB external drive box, and if I use NTFS it trashes the drive every time you reboot the computer. I eventually gave up and tried FAT32, and it's been perfect ever since.
 
NTFS is finicky. That's why it's not well supported under Linux/BSD/OS X, although looking it up on Wikipedia and Google, there are implementations of read/write access on all of those systems with varying results. Most of them seem to leave errors (not fatal), although depending on what's done, corruption of the file system is a possibility, which is one of the reasons I don't like NTFS. I think it's because NTFS is a journalling file system and its complete operation is not public knowledge, so creating full R/W access is not easy.

Just ran across this bit on reading the Wiki entry further.
https://en.wikipedia.org/wiki/NTFS-3G
 
I'm just going to get a new computer. The last thing left to pick is the motherboard. I was looking at the Asus P5K Deluxe (I'd go with the P5K3 if only it supported DDR2), but it doesn't seem to be available in Canada yet.

And the Gigabyte P35-DQ6 looks good... but a lot of it seems like marketing, so I don't quite trust them.
 
A new computer won't fix the problem if the drive is still formatted FAT32 =)
 
Hero999 said:
I don't know about Mac OS X, but Linux and BSD can only support NTFS in read-only mode. If you want read/write, there's the Captive project, but you need a Windows license to legally use the drivers.

You can get a freeware Ext2 driver for Windows, which is also a possibility.

Well... Linux has supported full read/write to NTFS for a very long time now :D
The kernel-level driver, which used to have read-only and VERY dodgy write support, went a step further six months ago and gained write support as well, but ONLY to existing files and without increasing file size (i.e. next to useless).

Captive NTFS, which used WINE and ntfs.sys from an XP installation, did work, but was painfully SLOW...


The solution that came along six months ago (on the back of the kernel-space driver) is the userland driver ntfs-3g:
http://www.ntfs-3g.org/
Fully open source (i.e. no ntfs.sys from a Windows install), fully safe and fully functional. It's very good as well. Since it runs in userland (via FUSE) it is slightly slower than the kernel driver, BUT a million times faster than Captive NTFS.
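
In practice, using it comes down to a single mount command; the sketch below just wraps that call from Python (the device node and mount point are hypothetical, and you need root and an ntfs-3g install):

```python
# Mount an NTFS partition read/write via the ntfs-3g FUSE driver.
# /dev/sdb1 and /mnt/external are placeholders; adjust for your system.
import subprocess

subprocess.run(["ntfs-3g", "/dev/sdb1", "/mnt/external"], check=True)
```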
 
I have a friend who just built a PC, and he bought 8 Maxtor internal hard disks.
He hooked them all together in a RAID config; now he has 1 terabyte of space.
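
For scale, the usable space depends heavily on the RAID level. A quick sketch (the per-drive size is inferred from "8 drives, 1 terabyte", and the levels shown are generic, not necessarily what he used):

```python
# Usable capacity of eight equal drives under a few common RAID levels.
N, DRIVE_GB = 8, 125  # 8 x 125 GB = 1 TB when striped (inferred, not stated)

print("RAID 0 (stripe):       ", N * DRIVE_GB, "GB usable, no redundancy")
print("RAID 1 (mirrors):      ", N * DRIVE_GB // 2, "GB usable")
print("RAID 5 (single parity):", (N - 1) * DRIVE_GB, "GB usable")
```

So "1 terabyte of space" from eight drives implies RAID 0, with no redundancy at all, which makes the drive-reliability discussion below rather relevant.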
 
Imnewtothis said:
I have a friend who just built a PC, and he bought 8 Maxtor internal hard disks.
He hooked them all together in a RAID config; now he has 1 terabyte of space.

Pity he used Maxtor drives, though; in my experience they are VERY prone to failure. I also know a guy who ran a small ISP business and bought a batch of Maxtor drives for his servers; he had HUGE failures with them during the first 12 months.
 
They're crap alright.

I had a 40GB Maxtor and it failed whilst my computer was running!

It made a crazy buzzing noise. My PC carried on working (as everything was still in RAM), so I just assumed some dirt was caught in the fan, until I tried to reboot and it totally failed. The BIOS screen gave a bad superblock error; I did some research and tried fixing it with Knoppix, but it was dead alright.
 
Nigel Goodwin said:
Pity he used Maxtor drives, though; in my experience they are VERY prone to failure. I also know a guy who ran a small ISP business and bought a batch of Maxtor drives for his servers; he had HUGE failures with them during the first 12 months.

I've been through two (didn't buy them; they came in secondhand computers) and my neighbor lost one, two weeks after he bought it, too.

In my experience, Western Digital and Seagate are both good for hard drives. I have WD hard drives in Macs that are 12 years old and still running fine. Go figure.
 
A Maxtor drive in my brother-in-law's machine went bad on me. I've used mostly Western Digital, as they're a good price/performance product.
 
Uh, well, all drives are prone to failure... it depends on many things: temperature, power supply capacity, "bad user errors" lol... It's not about the brand... it's mostly about the user, but some brands offer stronger products and others don't...
 
Actually, it can be about the brand. Everyone has bad days and companies are no different, and they don't usually disclose failure rates publicly; just look at the whole Xbox failure fiasco Microsoft is having trouble with. They just took a billion-dollar hit extending the warranty to three years, which is virtually unheard of. They're just trying to protect themselves from lawsuits from people who bought consoles when MS knew there was a problem with the units.

Quite a few years ago I was helping my father on a Novell fault-tolerant server setup, back when that stuff was brand spanking new, and we ended up getting a bad batch of Seagate drives, which caused massive problems. There were 10 drives in a RAID 0/1 setup, and massive blocks of bad sectors popped up on multiple drives; after replacement another set of drives went, and then another... These were physical media failures.
 