| ▲ | DIY NAS: 2026 Edition (blog.briancmoses.com) |
| 165 points by sashk 7 hours ago | 63 comments |
| |
|
| ▲ | mvkel 3 hours ago | parent | next [-] |
| Wait. You build a new one every -year-?! How does one establish the reliability of the hardware (particularly the aliexpress motherboard), not to mention data retention, if its maximum life expectancy is 365 days? |
| |
| ▲ | p1necone 3 hours ago | parent [-] | | Looks like they built a new NAS, but kept using the same drives. Which, given the number of drive bays in the NAS, probably makes up a large majority of the overall cost in something like this. Edit: reading comprehension fail - they bought drives earlier, at an unspecified price, but they weren't from the old NAS - I agree, when lifetimes of drives are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time. | | |
|
|
| ▲ | starky an hour ago | parent | prev | next [-] |
| I think the worry about power consumption is a bit overblown in the article. My NAS has an i5-12600 + Quadro P4000 and uses maybe 50% more power than the one in this article under normal conditions. That works out to maybe $4/month more cost. Given the relatively small delta, I'd encourage picking hardware based on what services you want to run. |
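For anyone wanting to sanity-check that kind of estimate, a rough back-of-the-envelope calculation (the ~35 W delta and the $0.15/kWh rate are assumed placeholders, not figures from the comment):

```python
# Monthly cost of an extra ~35 W of average draw at an assumed $0.15/kWh.
delta_watts = 35          # assumed extra average power draw
price_per_kwh = 0.15      # assumed electricity price, USD
hours_per_month = 24 * 30

extra_kwh = delta_watts / 1000 * hours_per_month    # ~25.2 kWh
extra_cost = extra_kwh * price_per_kwh              # ~$3.80/month
print(f"{extra_kwh:.1f} kWh -> ${extra_cost:.2f}/month")
```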
| |
| ▲ | silversmith an hour ago | parent | next [-] | | Less power, less heat. Less heat, less cooling required. At some point that allows you to go fanless, and that's very beneficial if you have to share a room with the device. | |
| ▲ | dontlaugh an hour ago | parent | prev [-] | | It depends how much electricity costs where you live. I’m quite pleased mine idles at ~15W. |
|
|
| ▲ | VTimofeenko 5 hours ago | parent | prev | next [-] |
| Built a NAS last winter using the same case. Temps for HDDs used to be in mid-50s C with no fan and about 40 with the stock fan. The case-native backplane thingamajig does not provide any sort of pwm control if the fan is plugged in, so it's either full blast or nothing. I swapped the fan for a Thermalright TL-B12 and the HDDs are now happily chugging along at about 37 with the fan barely perceptible. Hddfancontrol ramps it up based on the output of smartctl. Case can actually fit a low-profile discrete GPU, there's about half height worth of space. |
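For anyone who wants to hand-roll a cruder version of what hddfancontrol does, a minimal sketch of the idea (the drive list, hwmon path, and temperature thresholds are illustrative assumptions, and you may first need to switch the fan header to manual PWM control):

```sh
#!/bin/sh
# Crude temperature-based fan control: read HDD temps via smartctl,
# then pick a PWM duty cycle. Run as root from cron or a systemd timer.
PWM=/sys/class/hwmon/hwmon2/pwm1     # placeholder: find your fan's hwmon node
# echo 1 > "${PWM}_enable"           # may be needed once to take manual control

max=0
for d in /dev/sda /dev/sdb; do       # placeholder drive list
    t=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10; exit}')
    [ "${t:-0}" -gt "$max" ] && max=$t
done

if   [ "$max" -ge 45 ]; then echo 255 > "$PWM"   # full blast
elif [ "$max" -ge 38 ]; then echo 140 > "$PWM"   # medium
else                         echo 70  > "$PWM"   # barely perceptible
fi
```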
|
| ▲ | mzhaase 3 hours ago | parent | prev | next [-] |
| I would like to point people to the Odroid H4 series of boards. N97 or N355, 2*2.5GbE, 4*SATA, 2 W in idle. Also has extension boards to turn it into a router for example. The developer hardkernel also publishes all relevant info such as board schematics. |
| |
| ▲ | kajika91 2 hours ago | parent [-] | | I also have an older Odroid HC4; it's been running smoothly for years. Not only can I not spend $1000 on a NAS as the current post implies, but the power consumption seems crazy to me for mere disk-over-network usage (a 500W power supply). I like the extensive benchmarks from hardkernel; the only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian. |
|
|
| ▲ | evanjrowley 2 hours ago | parent | prev | next [-] |
| I would have chosen the i3-N305 version of that motherboard because it has In-Band ECC (IBECC) support - great for ZFS. IBECC is a very underrated feature that doesn't get talked about enough. It may be available for the N150/N355, but I have never seen confirmation. |
| |
| ▲ | zenoprax 2 hours ago | parent | next [-] | | Can you explain why ECC is great for ZFS in particular as opposed to any other filesystem?
And if the data leaves the NAS to be modified by a regular desktop computer then you lose the ECC assurance anyway, don't you? | | |
| ▲ | supermatt an hour ago | parent | next [-] | | ZFS is about end-to-end integrity, not just redundancy. It stores checksums of data when writing, checks them when reading, and can perform automatic restores from mirror members if mismatches occur. During writes, ZFS generates checksums from blocks in RAM. If a bit flips in memory before the block is written, ZFS will store a checksum matching the corrupted data, breaking the integrity guarantee. That’s why ECC RAM is particularly important for ZFS - without it you risk undermining the filesystem’s end-to-end integrity. Other filesystems usually lack such guarantees. | |
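A toy illustration of the failure mode being described (plain Python, nothing ZFS-specific): if the bit flips before the checksum is computed, the stored checksum happily "verifies" the corrupted block.

```python
import hashlib

block = bytearray(b"payload that should reach the disk intact")

# A bit flips in non-ECC RAM *before* the filesystem computes its checksum...
block[3] ^= 0x01

# ...so the checksum that gets stored describes the already-corrupted data.
stored_checksum = hashlib.sha256(block).hexdigest()

# On read-back the data matches its checksum, so the corruption is invisible
# to end-to-end verification.
assert hashlib.sha256(block).hexdigest() == stored_checksum
print("checksum verifies; corruption undetected")
```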
| ▲ | adastra22 an hour ago | parent | prev [-] | | The oversimplified answer is that ZFS’ in-memory structures are not designed to minimize bitflip risk, as some file systems are. Content is hashed when written to memory cache, but it can be a long time before it then gets to disk. Very little validation is done at that point to protect against writing bad data. |
| |
| ▲ | Alive-in-2025 2 hours ago | parent | prev [-] | | what is the impact on performance, does it require special ram? just heard about this here | | |
| ▲ | gforce_de 2 hours ago | parent [-] | | sorry for the german comment - ECC is mandatory! Obligatory copypasta:
"16GB of RAM is mandatory, no ifs and buts. ECC is not mandatory, but ZFS is designed for it. If data is being read and something somehow gets garbled in RAM, an actually intact file on disk could be 'corrected' with an error. So yes to ECC. The problem with ECC is not the ECC memory itself, which costs only slightly more than conventional memory - it's the motherboards that support ECC. Careful with AMD: boards often claim ECC support, but what's meant is that ECC modules will run while the ECC function itself goes unused. LOL. Most boards with ECC are server boards. If you don't mind used hardware, you can get a bargain with, e.g., an old socket 1155 Xeon on an Asus board. Otherwise the ASRock Rack series is recommended. Expensive, but power-efficient. A general downside of server boards: boot time takes forever. Consumer boards spoil you with short boot times; servers often need two minutes before the actual boot process even starts. So Bernd's server consists of an old Xeon, an Asus board, 16GB of 1333MHz ECC RAM and 6x 2TB drives in a RAIDZ2 (RAID6). 6TB are usable net. I kind of like old hardware; I like to push it until it won't go any further. The drives are already 5 years old but aren't acting up. Speed is great, 80-100MB/s over Samba and FTP. By the way, I don't leave the server running; I shut it down when I don't need it. What else? Compression is great. Although I mostly store data that can't be compressed further (music, videos), the built-in compression still gained me about 1% of space. On 4TB that's roughly 40GB saved. The Xeon is still a bit bored. As a test I tried gzip-9 compression, and that did make it sweat." | |
|
|
|
| ▲ | speff 6 hours ago | parent | prev | next [-] |
| Q - assuming the NAS is used strictly as a NAS and not as a server with VMs, is there a point in having a large amount of RAM (large as in >8GB)? I'm not sure what the benefit would be, since all it's doing is moving data from the drives over to the network. |
| |
| ▲ | firecall 3 hours ago | parent | next [-] | | I am not at all an expert, I can only share my anecdotal, unscientific observations! I'm running a TrueNAS box with 3x cheap shucked Seagate drives.* The TrueNAS box has 48GB RAM, is using ZFS and is sharing the drives as a Time Machine destination to a couple of Macs in my office. I can un-confidently say that it feels like the fastest TM device I've ever used! TrueNAS with ZFS feels faster than OpenMediaVault (OMV) did on the same hardware. I originally set up OMV on this old gaming PC, as OMV is easy. OMV was reliable, but felt slow compared to how I remembered TrueNAS and ZFS feeling the last time I set up a NAS. So I scrubbed OMV and installed TrueNAS, and purely based on seat-of-the-pants metrics, ZFS felt faster. And I can confirm that it soaks up most of the 48GB of RAM! TrueNAS reports the ZFS cache currently at 36.4 GiB. I don't know why or how it works, and it's only a Time Machine destination, but there we are, those are my metrics and that's what I know LOL * I don't recommend this.
They seem unreliable and report errors all the time.
But it's just what I had sitting around :-)
I'd hoped by now to be able to afford to stick 3x 4TB/8TB SSDs of some sort in the case, but prices are tracking up on SSDs... | |
| ▲ | mewse-hn 6 hours ago | parent | prev | next [-] | | ZFS uses a large amount of RAM; I think the old rule of thumb was 1GB of RAM per 1TB of storage | |
| ▲ | yjftsjthsd-h 5 hours ago | parent | next [-] | | That's only for deduplication. https://superuser.com/a/993019 | | |
| ▲ | Lammy 5 hours ago | parent | next [-] | | I do like to deduplicate my BitTorrent downloads/seeding directory with my media directories so I can edit metadata to my heart's content while still seeding forever without having to incur 2x storage usage. I tune the `recordsize` to 1MiB so it has vastly fewer blocks to keep track of compared to the default 128K, at the cost of any modification wasting very slightly more space. Really not a big deal though when talking about multi-gibibyte media containers, multi-megapixel art embeds, etc. | | | |
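For reference, the tuning being described is just a couple of dataset properties (the pool/dataset name is a placeholder):

```sh
# Larger records mean far fewer blocks for the dedup table to track,
# at the cost of slightly more space wasted per small modification.
zfs set recordsize=1M tank/media
zfs set dedup=on tank/media
```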
| ▲ | ekropotin 2 hours ago | parent | prev [-] | | ZFS also uses RAM for a read-through cache, aka the ARC.
However, I'm not sure how noticeable the effect of increased RAM would be - I assume it mostly benefits read patterns with high data reuse, which is not that common. |
| |
| ▲ | 01HNNWZ0MV43FF 6 hours ago | parent | prev [-] | | Huh. More than just the normal page cache on other filesystems? | | |
| ▲ | WarOnPrivacy 6 hours ago | parent | next [-] | | Yes. Parent's comment matches everything I've heard. 32GB is a common recommendation for home lab setups. I run 32 in my TrueNAS builds (36TB and 60TB). | | |
| ▲ | magicalhippo 2 hours ago | parent [-] | | You can run it with much less. I don't recall the bare minimum but with a bit of tweaking 2GB should be plenty[1]. I recall reading some running it on a 512MB system, but that was a while ago so not sure if you can still go that low. Performance can suffer though, for example low memory will limit the size of the transaction groups. So for decent performance you will want 8GB or more depending on workloads. [1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu... |
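On Linux OpenZFS the main knob for a low-memory box is capping the ARC; a minimal example (the 1 GiB value is illustrative, not a recommendation):

```sh
# Cap the ARC at 1 GiB on the running system...
echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max
# ...and make it persistent across reboots.
echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf
```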
| |
| ▲ | tekla 6 hours ago | parent | prev [-] | | ZFS will eat up as much RAM as you give it as it caches files in memory as accessed. | | |
| ▲ | ac29 5 hours ago | parent [-] | | All filesystems do this (at least all modern ones, on linux) |
|
|
| |
| ▲ | PikachuEXE 6 hours ago | parent | prev | next [-] | | If you use ZFS you might need more RAM for performance? | |
| ▲ | loloquwowndueo 6 hours ago | parent | prev | next [-] | | Caching files in ram means they can be moved to the network faster - right? | | |
| ▲ | ac29 5 hours ago | parent | next [-] | | Depends on the network speed. At 1Gbps a single HDD can easily saturate the network with sequential reads. A pair of HDDs could do the same at 2.5Gbps. At 10Gbps or more, you would definitely see the benefits of caching in memory. | |
| ▲ | butvacuum 5 hours ago | parent [-] | | Not as much as expected. I have several toy ZFS pools made out of ancient 3TB WD Reds, and anything remotely home-grade (striped mirrors, 4/6/8-wide raidz1/2) saturates the disks before 10gig networking does. As long as it's sequential, 8GB or 128GB doesn't matter. |
| |
| ▲ | speff 6 hours ago | parent | prev [-] | | Makes sense. I didn't know if the FS used RAM for this purpose without some specialized software. PikachuEXE and Mewse mentioned ZFS. Looks like it has native support for caching frequent reads [0]. Good to know [0]: https://www.truenas.com/docs/references/l2arc/ |
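(For completeness, adding such a cache device to an existing pool is a one-liner; the pool and device names here are illustrative:)

```sh
# Attach an NVMe device to the pool as L2ARC (a second-level read cache).
zpool add tank cache /dev/nvme0n1
```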
| |
| ▲ | justsomehnguy 5 hours ago | parent | prev [-] | | As the others said already, more RAM means more cache. Honestly it's not that needed, but if you would really use 10Gbit+ networking, then one second at line rate is ~1.25GBytes. So depending on your usage you might never see more than 15% utilization, or have nearly all of it used if you constantly run something on it, i.e. torrents, or use it as a SAN/NAS for VMs on some other machine. But for rare, occasional home usage neither 32GB nor this monstrosity and complexity makes sense - just buy some 1-2 bay Synology and forget about it. | |
|
|
| ▲ | fmajid an hour ago | parent | prev | next [-] |
| I upgraded my home backup server a couple of months ago to a Minisforum N5 Pro, and am very happy with it. It only has 4 3.5” drive slots, but I only use two with 2x20TB drives mirrored, and two 14TB external drives for offsite backups. The AMD AI 370 CPU is plenty fast so I also run Immich on it, and it has ECC RAM and 10G Ethernet. |
|
| ▲ | zdw 4 hours ago | parent | prev | next [-] |
| The Jonsbo N3 case, which holds 8x 3.5" drives, has a smaller footprint than this one, which might be better for most folks. It needs an SFX PSU though, which is kind of annoying. If you get an enterprise-grade ITX board with a x16 PCIe slot that can be bifurcated into four M.2 form factor PCIe x4 connections, it really opens up options for storage: * A 6x SATA card in M.2 form factor from ASMedia or others will let you fill all the drive slots even if the logic board only has 2/4/6 ports on it. * The other ports can be used for conventional M.2 NVMe drives. |
| |
| ▲ | ehnto 4 hours ago | parent [-] | | That's what I built! It's a great case; the only components I didn't already have lying around were the motherboard and PSU. It's very well made, and not as tight on space as I expected either. The only issue is, as you noted, you have to be really careful with your motherboard choice if you want to use all 8 bays for a storage array. Another gotcha was making sure to get a CPU with integrated graphics, otherwise you have to waste your PCIe slot on a graphics card and have no space for the extra SATA ports. |
|
|
| ▲ | dllu 6 hours ago | parent | prev | next [-] |
| Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10 / TB [1]. This seems like a pretty decent price even though the Barracudas are rated for only 2400 hours per year [2] - but that is the same spec the refurbished Exos drives are rated for. By the way, interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged for an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps have a slightly more annoying spectrum). Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone Technology SX500-G 500W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I heard good things about getting Delta Flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them out yet. On Taobao, those are like $30. [1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f... [2] https://www.seagate.com/content/dam/seagate/en/content-fragm... |
| |
| ▲ | WarOnPrivacy 5 hours ago | parent | next [-] | | > $10 / TB That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate). | | |
| ▲ | rubatuga 5 hours ago | parent [-] | | I've recently shucked some Seagate HAMR 26TB drives; hopefully they last |
| |
| ▲ | ghthor 5 hours ago | parent | prev [-] | | Not surprised by the fan, once I went noctua I didn’t go back. |
|
|
| ▲ | exmadscientist 5 hours ago | parent | prev | next [-] |
| Are there any NAS solutions for 3.5" drives, homebrew or purchased, that are slim enough to stash away in a wall enclosure? (This sort of thing: https://www.legrand.us/audio-visual/racks-and-enclosures/in-... , though not that particular model or height.) I'd like to really stash something away and forget about it. Height is the major constraint; it can only be ~3.5" tall. And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure. |
| |
| ▲ | jmb99 5 hours ago | parent | next [-] | | > And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure. Do you have to use that particular wall enclosure thing? A 1U chassis at 1.7” of height fits 4 drives (and a 2U at ~3.45” fits 12), and something like a QNAP is low-enough power to not need to worry about cooling too much. If you’re willing to DIY it would not be hard at all to rig up a mounting mechanism to a stud, and then it’s just a matter of designing some kind of nice-looking cover panel (wood? glass in a laser-cut metal door? lots of possibilities). I guess my main question is, what/who is this for? I can’t picture any environment that you have literally 0 available space to put a NAS other than inside a wall. A 2-bay synology/qnap/etc is small enough to sit underneath a router/AP combo for instance. | | |
| ▲ | exmadscientist 4 hours ago | parent [-] | | > Do you have to use that particular wall enclosure thing? It's already there in the wall. All the Cat5e cabling in the house terminates there, so all the network equipment lives in there, which makes me kind of want to also put the NAS in there. |
| |
| ▲ | butvacuum 5 hours ago | parent | prev [-] | | 1 liter PC's (tiny/mini/micro), or some N100 type build + external bay is likely your best bet. If it's really that small, you might have heat issues. |
|
|
| ▲ | dbalatero 5 hours ago | parent | prev | next [-] |
| I researched a bunch of cases recently and the Jonsbo, while it looked good, came up as having a ton of issues with airflow to cool the drives. Because of this, I ended up buying the Fractal Node 804 case, which seemed to have a better overall quality level and didn't require digging around AliExpress for a vendor. |
| |
| ▲ | no_time 3 hours ago | parent [-] | | lol same. All my parts arrived except the 804. The supply chain for these cases appears to be imploding where I live (Hungary). The day after I ordered, it either went out of stock or went up by +50% in all the webshops that are reputable here. I'm still a bit torn on whether getting the 804 was the right call, or whether the 304 would've been enough, for a significantly smaller footprint and -2 bays. Hard to tell without seeing them in person lol. Are you satisfied with it? Any issues that came up since building? |
|
|
| ▲ | p1mrx 4 hours ago | parent | prev | next [-] |
| I recently got a used QNAP TS-131P for cheap, that holds one 3.5" drive for offsite backup at a friend's house. It's compact and runs off a common 12V 3A power supply. There is no third-party firmware available, but at least it runs Linux, so I wrote an autorun.sh script that kills 99% of the processes and phones home using ssh+rsync instead of depending on QNAP's cloud: https://github.com/pmarks-net/qnap-minlin |
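The phone-home part of that kind of setup is essentially just rsync over ssh on a timer (the host, key, and paths below are placeholders; the linked repo has the actual script):

```sh
# Push the backup share to a machine at home over ssh, transferring deltas only.
rsync -az --delete --partial \
    -e "ssh -i /root/.ssh/backup_key" \
    /share/backup/ backup@home.example.net:/srv/offsite/
```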
|
| ▲ | disambiguation 5 hours ago | parent | prev | next [-] |
| I too was in the market recently for a NAS, downgrading from a 12 bay server because of YAGNI - it's far too big, too loud, runs hot, and uses way too much energy. I was also tempted by the Jonsbo (it's a very nice case) but prices being what they are it was actually better to get a premade 4 bay model for under $500 (batteries included, HDDs are not). It's small, quiet, power efficient, and didn't break the bank in the process. Historically DIY has always been cheaper, but that's no longer the case (no pun intended) |
|
| ▲ | WarOnPrivacy 6 hours ago | parent | prev | next [-] |
| I have built 2 NAS that borrow ideas from his blogs. One uses the Silverstone CS382 case (6x 6TB SAS) and the other uses a Topton N5105 Mini-ITX board (6x 10TB SATA). I'm quite happy with both. ref: https://blog.briancmoses.com/2024/07/migrating-my-diy-nas-in... |
|
| ▲ | aetherspawn 2 hours ago | parent | prev | next [-] |
| Obligatory comment every time one of these threads comes up: with Synology, sure, the hardware is a bit dated, but as far as set-and-forget goes... I've run multiple Synology NASes at home, at businesses, etc., and you can literally forget that it's not someone else's cloud. It auto-updates on Sundays, always comes back online, and you can go for years (in one case, nearly a decade) without even logging into the admin; it just hums along and works. |
| |
| ▲ | PeterStuer 2 hours ago | parent | next [-] | | Until you get the blue flashing light of death. Luckily I was able to source an identical old model off eBay to transfer the disks to. | |
| ▲ | imiric 31 minutes ago | parent | prev [-] | | What makes you think that Synology hardware is special in that sense? Most quality hardware will easily last decades. I have servers in my homelab from 2012 that are still humming along just fine. Some might need a change of fans, but every other component is built to last. |
|
|
| ▲ | jaimex2 4 hours ago | parent | prev | next [-] |
| What's the plan if your house burns down? |
| |
|
| ▲ | DeathArrow 3 hours ago | parent | prev | next [-] |
| I wonder how many consumer-level HDDs in RAID5 it will take to saturate a 10Gbps connection. My napkin math says that of the 1,250 MB/s we can achieve around 1,150 MB/s due to network overhead, so it means about 5 Red Pro / IronWolf Pro drives (reading at about 250–260 MB/s each) in RAID5 to saturate the connection. |
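Roughly the same napkin math, spelled out (the usable payload and per-drive throughput are the assumptions above, and parity overhead is optimistically ignored):

```python
# How many drives does a RAID5 array need before summed sequential reads
# exceed the usable payload of a 10GbE link?
link_payload = 1150   # MB/s left on 10GbE after protocol overhead (assumed)
drive_read = 250      # MB/s sequential per consumer NAS drive (assumed)

for n in range(2, 8):
    aggregate = n * drive_read   # optimistic: assumes striping hides parity reads
    print(f"{n} drives: {aggregate} MB/s, saturates link: {aggregate >= link_payload}")
# 5 drives is the first count that clears ~1150 MB/s (before any RAID5 read penalty)
```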
| |
| ▲ | ekropotin 2 hours ago | parent [-] | | I thought RAID5 was highly discouraged | |
| ▲ | Mashimo an hour ago | parent [-] | | I can't remember the details, but was that not specifically for hardware raid controllers? 2000s style. I think for home use with MDADM or raid z2 on zfs it's just gucci. It's cost effective. |
|
|
|
| ▲ | pSYoniK 43 minutes ago | parent | prev [-] |
| TL;DR - please stop wasting tons of resources putting together new servers every year and turning this into yet another outlet for "I have more money than sense and hopefully I can buy myself into happiness". Just get old, random hardware and play around with it; you'll learn so much that you will be able to truly appreciate the difference between consumer and enterprise hardware.
This seems awfully wasteful. One of the main reasons I built my own home server was to reduce resource usage. One could probably argue that the carbon footprint of keeping your photos in the cloud and running services there is lower than building your own little datacentre copy locally - and where would we be if everyone built their own server, then what? Well, I think that paying Google/Apple/Oracle/etc. money so that they continue their activities has a bigger carbon footprint than me picking up old used parts and running them on a solar/wind-only electricity plan. I also think I'm going a bit overboard with this, and I'm not suggesting you vote with your wallet, because that doesn't work. If you want real change it needs to come from the government. You not buying a motherboard won't stop a corporation from making another 10 million.
Anyway, except for the hard drives, all components were picked up used. I like to joke it's my little Frankenstein's monster, pieced together from discarded parts no one wanted or had any use for. I've also gone down the rabbit hole of building the "perfect" machine, but I guess I was thinking too highly of myself and of the actual use case.
The reason I'm posting this is to help someone who might not build a new machine because they don't have ECC, and without ECC ZFS is useless, and you need enterprise drives, and you want 128 GB of RAM in the machine, and you could also pick up used enterprise hardware, and you could, etc... If you wish to play around with this, the best way is to just get into it. The same way Google started with consumer-level hardware, so can you. Pick up a used motherboard, some used RAM and a used CPU, throw them into a case and let it rip. Initially you'll learn so much that it alone is worth every penny.
When I built my first machine, I wasn't finding any decently priced used former desktop from HP/Lenovo/Dell, so I found a used i5 8500T for about $20, 8 GB of RAM for about $5, a used motherboard for $40, the case was $20 and the PSU was $30. All in all the system was $115, and for storage I used an old 2.5-inch SSD as a boot drive and 2 new NAS hard drives (which I still have, btw!). This was amazing. Not having ECC, not having a server motherboard/system, not worrying about all that stuff allowed me to get started. The entry bar is even lower now, so just get started, don't worry.
People talk about flipped bits as if it happens all day, every day. If you are THAT worried, then yeah, look for a used server barebone or even a used server with ECC support, and do use ZFS. But I want to ask: how comfortable are you making the switch 100% overnight without ever having spent any time configuring even the most basic server that NEEDS to run for days/weeks/months? Old/used hardware can bridge this gap, and when you're ready it's not like you have to throw the baby out with the bathwater. You now have another node in a Proxmox cluster. Congrats! The old machine can run LXCs and VMs, it could be a firewall, it could do anything, and when it fails, no biggie.
Current setup for those interested:
i7 9700T
64 GB DDR4 (2x32)
8, 10, 12, 12, 14 TB HDDs (snapraid setup; the 14 TB HDD holds the parity info)
X550-T2 10Gbps network card
Fractal Design Node 804
Seasonic Gold 550W
LSI 9305-16i
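For anyone curious what a snapraid layout like that looks like on disk, a minimal config sketch (mount points and disk names are placeholders; the real file is typically /etc/snapraid.conf):

```
# One parity file on the largest (14 TB) disk; each data disk listed separately.
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
data d4 /mnt/disk4/
exclude *.unrecoverable
exclude /lost+found/
```

Changes then get protected with `snapraid sync`, and `snapraid scrub` periodically re-verifies data against parity.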
| |
| ▲ | imiric 22 minutes ago | parent [-] | | It's a bit patronizing to tell people what to do with their money. If you care more about the environment than enjoying technology, then go ahead and do what you suggest. If you want to be really green, how about giving up technology altogether? Go full vegan, abandon all possessions, and all that? Or if you really want to help the planet, have you considered suicide? There's always more you can do. I'd rather enjoy my life, and not tell others how to enjoy theirs, unless it's impacting mine. Especially considering that the impact of a single middle-class individual pales in comparison to the impact of corporations and absurdly wealthy individuals. Your rant would be better served to representatives in government than tech nerds. |
|