File handling in server-side JavaScript

Wednesday, September 11, 2024, 11:00, by InfoWorld
Working with files on the server is a perennial need for developers. Server-side JavaScript platforms like Node, Deno, and Bun offer a flexible and fairly simple approach to doing things with files. This article shows you how to use the fs library to handle the most common file-handling needs. Examples include reading, writing, updating, and deleting files in both synchronous and asynchronous modes, handling binary data, and streaming files in chunks.

JavaScript’s filesystem (fs) library

In server-side JavaScript, you will most likely use the fs library for dealing with the filesystem. This library is a module in Node and other platforms like Bun. So you don’t need to install it using a package manager like NPM; you can just import it into your scripts using ES6 modules:

import fs from 'fs';

Or with CommonJS syntax:

const fs = require('fs');

Newer versions of Node also allow using the namespaced version:

const fs = require('node:fs');

Synchronous vs. asynchronous file handling

Once you have the fs object in hand, there are two ways to interact with the filesystem: synchronously or asynchronously.

Synchronous file handling makes for simpler code, but asynchronous file handling offers more opportunity for platform-level optimization because it doesn’t block execution.

When using async, you may use callbacks or promises. If you choose promises, you'll import the promise-based version of the module ('node:fs/promises'), which is built into Node, so there's nothing extra to install.
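For example, a quick sketch of both import styles for the promise-based module:

import fs from 'node:fs/promises';

Or, with CommonJS:

const fs = require('node:fs/promises');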

Creating a file

Let’s start by creating a file. Our write script will be called write.mjs, because Node wants the .mjs extension when we use ES6 modules. The file it creates is called koan.txt and contains the text of a short Zen story:

import fs from 'node:fs';

const content = `A monk asked Chimon,
“Before the lotus blossom has
emerged from the water, what
is it?” Chimon said, “A lotus
blossom.” The monk pursued,
“After it has come out of the
water, what is it?” Chimon
replied, “Lotus leaves.”`;

try {
  fs.writeFileSync('koan.txt', content);
  console.log('Koan file created!');
} catch (error) {
  console.error('Error creating file:', error);
}

Notice that when writing a file, we wrap everything in a try block in case there’s an error. The fs.writeFileSync() method makes it very simple: it just takes the filename and its contents.

You’ll also see the parts of fs imported with destructuring, for example:

import { writeFileSync } from 'node:fs';

I’m glossing over other fs capabilities for now, like setting file properties and encodings. The defaults work well for our needs here.
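For instance, writeFileSync() accepts an options object where the encoding and file mode can be set explicitly. A quick sketch ('utf8' is the default encoding; the mode shown is just an illustrative permission setting):

fs.writeFileSync('koan.txt', content, { encoding: 'utf8', mode: 0o644 });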

Now, let’s look at writing the file asynchronously. Assuming the content variable is the same, we could do this:

// writeAsync.js
const fs = require('fs');

const content = '...';
const filename = 'asyncKoan.txt';

fs.writeFile(filename, content, (err) => {
  if (err) {
    console.error('Error writing file:', err);
  } else {
    console.log(`File '${filename}' written successfully!`);
  }
});

This time, we imported fs using CommonJS require, so the file can use a plain .js extension. Now we are using the async approach with callbacks. The callback accepts an argument, named err, which will be populated if an error is encountered.

Finally, here’s how to do the same thing using the promise-based approach:

// writePromise.js
const fs = require('node:fs/promises');

const content = '...';
const filename = 'promiseKoan.txt';

async function writeFileWithPromise() {
  try {
    await fs.writeFile(filename, content);
    console.log(`File '${filename}' written successfully!`);
  } catch (err) {
    console.error('Error writing file:', err);
  }
}

writeFileWithPromise();

In this example, we use async/await to take the promise and handle it in a synchronous style. We could also use the promise then/catch handlers directly:

// writePromiseDirect.js
const fs = require('node:fs/promises');

const content = '...';
const filename = 'promiseKoan.txt';

fs.writeFile(filename, content)
  .then(() => console.log(`File '${filename}' written successfully!`))
  .catch((err) => console.error('Error writing file:', err));

Promises provide the greatest degree of flexibility at the cost of just a bit more complexity. These examples give a sense of the main styles of working in fs:

Synchronous, with the *Sync methods

Async with callbacks

Async with Promises:

Using async/await

Using then/catch methods

Reading the file

Now let’s look at reading the koan.txt file. Here’s the synchronous approach:

// readFile.mjs
import { readFileSync } from 'node:fs';

let file = readFileSync('koan.txt');
console.log(file.toString('utf8'));

If we run it, we get:

$ node readFile.mjs
A monk asked Chimon…

Notice we have to manually decode the file into a string using UTF-8 encoding; otherwise, we get the raw buffer.
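Alternatively, readFileSync() accepts an encoding argument and returns a string directly, so the manual decode isn't needed:

let file = readFileSync('koan.txt', 'utf8');
console.log(file);

Now, if we wanted to do this async with callbacks, we enter: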

const fs = require('fs');

const filename = 'koan.txt';

fs.readFile(filename, (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
  } else {
    console.log(data.toString('utf8'));
  }
});

This lets us do the same work using an asynchronous callback.

You’ve seen how to use promises directly already, so I leave that as an exercise for you.

Updating a file

Updating a file just combines the two parts we’ve seen already: read a file, modify the content, and write out the content. Here’s an example:

const fs = require('fs');

const filename = 'koan.txt';

fs.readFile(filename, 'utf8', (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }

  const updatedContent = data.replace(/lotus blossom/g, 'water lily');

  fs.writeFile(filename, updatedContent, (err) => {
    if (err) {
      console.error('Error writing file:', err);
    } else {
      console.log(`File '${filename}' updated successfully!`);
    }
  });
});

In this example, we are using callback-style async. Generally, it’s preferred to use an asynchronous approach when updating files; this avoids blocking the event loop in the two steps of reading and writing.

We open the file, and in the callback perform the write operation using the content of the file. Of course, you could use promise then/catch to do the same kind of thing.
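For instance, here's a minimal sketch of the same update using then/catch (the script name is just illustrative):

// updatePromise.js
const fs = require('node:fs/promises');

const filename = 'koan.txt';

fs.readFile(filename, 'utf8')
  .then((data) => fs.writeFile(filename, data.replace(/lotus blossom/g, 'water lily')))
  .then(() => console.log(`File '${filename}' updated successfully!`))
  .catch((err) => console.error('Error updating file:', err));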

Text data formats

If you need to read a text file and parse it as structured data, then you can simply operate on the string you recover from the file. For example, if you’ve read a JSON file, you can parse it:

let myJson = JSON.parse(data);

Modify it:

myJson.albumName = 'Kind of Blue';

And then write out the new info using JSON.stringify(myJson).
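Putting those steps together, here's a minimal sketch of a JSON round trip (album.json and its contents are hypothetical):

// updateJson.js
const fs = require('node:fs/promises');

async function updateAlbum() {
  const raw = await fs.readFile('album.json', 'utf8');
  const myJson = JSON.parse(raw); // parse the text into an object
  myJson.albumName = 'Kind of Blue'; // modify it
  await fs.writeFile('album.json', JSON.stringify(myJson, null, 2)); // write it back, pretty-printed
}

updateAlbum().catch((err) => console.error('Error updating JSON:', err));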

A similar process could be used for other formats, like YAML, XML, and CSV, although those would require a third-party parsing library to handle effectively.

Deleting a file

Deleting a file is a simple operation. Here’s the synchronous approach, which usually is adequate:

const fs = require('node:fs');

const filename = 'koan.txt';

try {
  fs.unlinkSync(filename);
  console.log(`File '${filename}' deleted successfully!`);
} catch (err) {
  console.error('Error deleting file:', err);
}

Because fs is modeled on POSIX-style operations, deleting a file is called “unlinkSync”: the file is “unlinked” in the file system, and thus it is deleted.
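If you'd rather not block, the promise-based API offers the same operation as unlink(). A quick sketch:

// deletePromise.js
const fs = require('node:fs/promises');

fs.unlink('koan.txt')
  .then(() => console.log('File deleted successfully!'))
  .catch((err) => console.error('Error deleting file:', err));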

Handling non-text files in JavaScript

Text-encoded files are the most common, but JavaScript can also handle binary files. When working with binary files like images or audio files (or something more exotic like a custom game storage format or firmware update), we have to deal more directly with the buffer. (A buffer is a temporary memory location for moving data.)

A simple example will give you a sense of how it works. Most of the time, we’ll read binary data created by some other source, then perform operations on it, like compression or submitting to a learning algorithm, and then output it again to another source. For our simple example, we can create a fake binary file:

const fs = require('fs');

const filename = 'binary.bin';

const buffer = Buffer.from([0xDE, 0xAD, 0xBE, 0xEF]);

fs.writeFile(filename, buffer, (err) => {
  if (err) {
    console.error('Error writing file:', err);
    return;
  }

  console.log(`Binary file '${filename}' written successfully!`);
});

This is a simplified example because in real use we’d need to deal with whatever format we were using, like JPEG or MP3. But this example gives us something to read:

const fs = require('fs');

const filename = 'binary.bin';

fs.readFile(filename, (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log(data);

  // process the Buffer data using Buffer methods (e.g., subarray, copy)
});
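Once you have the Buffer, its methods let you inspect the raw bytes. A small sketch, reading back the four bytes we wrote above:

const fs = require('fs');

fs.readFile('binary.bin', (err, data) => {
  if (err) {
    console.error('Error reading file:', err);
    return;
  }
  console.log(data.toString('hex')); // 'deadbeef'
  console.log(data.readUInt32BE(0)); // 3735928559, the same bytes as one unsigned 32-bit integer
  console.log(data.subarray(0, 2)); // <Buffer de ad>
});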

Streaming files in JavaScript

Another facet of dealing with files is streaming in chunks of data, which becomes a necessity when dealing with large files. Here’s a contrived example of writing out in streaming chunks:

const fs = require('fs');

const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // (1)
const content = 'This is some content to be written in chunks.'; // (2)
const fileSizeLimit = 5 * 1024 * 1024; // (3)

let writtenBytes = 0; // (4)

const writeStream = fs.createWriteStream(filename, { highWaterMark: chunkSize }); // (5)

function writeChunk() { // (6)
  const chunk = content.repeat(Math.ceil(chunkSize / content.length)); // (7)

  if (writtenBytes + chunk.length > fileSizeLimit) {
    console.error('File size limit reached');
    writeStream.end(); // (10)
    return;
  }

  const canContinue = writeStream.write(chunk); // (8)
  writtenBytes += chunk.length; // (9)
  console.log(`Wrote chunk of size: ${chunk.length}, Total written: ${writtenBytes}`);

  if (canContinue) {
    writeChunk();
  } else {
    writeStream.once('drain', writeChunk); // (8)
  }
}

writeStream.on('error', (err) => { // (11)
  console.error('Error writing file:', err);
});

writeStream.on('finish', () => { // (11)
  console.log('Finished writing file');
});

writeChunk();

Streaming gives you more power, but you’ll notice it involves more work. The work you are doing is in setting chunk sizes and then responding to events based on the chunks. This is the essence of avoiding putting too much of a huge file into memory at once. Instead, you break it into chunks and deal with each one. Here are my notes on the interesting parts of the write example above, keyed to the numbered comments:

1. We specify a chunk size in bytes; in this case, 1MB (1024 * 1024), which is how much content will be written at a time.

2. Here’s some fake content to write.

3. Now, we create a file-size limit, in this case, 5MB.

4. This variable tracks how many bytes we’ve written (so we can stop writing after 5MB).

5. We create the actual writeStream object. The highWaterMark element tells it how big the chunks are that it will accept.

6. The writeChunk() function is recursive. Whenever a chunk needs to be handled, it calls itself. It does this unless the file limit has been reached, in which case it exits.

7. Here, we are just repeating the sample text until it reaches approximately the 1MB size.

8. Here’s the interesting part. If the file-size limit is not exceeded, then we call writeStream.write(chunk):

writeStream.write(chunk) returns false if the buffer size is exceeded. That means we can’t fit more in the buffer given the size limit.

When the buffer is exceeded, the drain event occurs, handled by the handler we define with writeStream.once('drain', writeChunk). Notice that this is a recursive callback to writeChunk.

9. This keeps track of how much we’ve written.

10. This handles the case where we are done writing and finishes the stream writer with writeStream.end().

11. This demonstrates adding event handlers for error and finish.

And to read it back off the disk, we can use a similar approach:

const fs = require('fs');

const filename = 'large_file.txt';
const chunkSize = 1024 * 1024; // 1 MB chunk size again

const readStream = fs.createReadStream(filename, { highWaterMark: chunkSize });

let totalBytesRead = 0;

readStream.on('data', (chunk) => {
  totalBytesRead += chunk.length;
  console.log(`Received chunk of size: ${chunk.length}, Total read: ${totalBytesRead}`);
  // Other work on chunk
});

readStream.on('error', (err) => {
  console.error('Error reading file:', err);
});

readStream.on('end', () => {
  console.log('Finished reading file');
});

The numbered notes on writing a stream can help us understand this snippet, as well. Reading is even simpler because we don’t have to keep track of the file-size limit.
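When you don't need to intervene on every chunk, you can also connect a read stream directly to a write stream with pipe(), which handles the backpressure (drain) logic from the write example for you. A minimal sketch (the copy's filename is illustrative):

// copyStream.js
const fs = require('fs');

const readStream = fs.createReadStream('large_file.txt');
const writeStream = fs.createWriteStream('large_file_copy.txt');

readStream.pipe(writeStream); // pipe() manages backpressure automatically

writeStream.on('finish', () => console.log('Copy complete'));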

Conclusion

This was a quick look at the fs module’s file-handling capability. Although we’ve covered a small portion of the API, we have all the essential elements, especially for the most common needs of interacting with text files. 

With just the basic features I’ve shown, you can deal with almost all your daily file-handling needs in server-side JavaScript. With further elaboration, fs will do just about anything else you’ll ever need.
https://www.infoworld.com/article/3504664/file-handling-in-server-side-javascript.html
