napp-it SE Solaris/Illumos Edition
- free to use without support
- commercial use allowed
- no storage limit
- free download for end users
napp-it cs client server
napp-it SE and cs
- Individual support and consulting
- Bugfixes/updates to newest releases
- Redistribution/bundling/installation on demand allowed
Request a quote.
Details: Featuresheet.pdf
Napp-it ToGo VM Download
Download current LTS: ZFS vSAN 151046 LTS template, an OVA template with OmniOS 151046 LTS for ESXi 6.7 and newer
Compatible with ESXi 6.7 and newer; 4 GB RAM, 2 vCPUs, includes VMware Tools and a vmxnet3 vnic in DHCP mode, with napp-it 23.dev (default tuning; Perl modules for TLS mail and the ESXi SOAP API installed). Login: root, no password, SSH disabled.
Update: to update the last LTS to the newest 151046 stable, run the following at the console (the old LTS remains bootable via a BE):
pkg unset-publisher omnios
pkg unset-publisher extra.omnios
pkg set-publisher -g https://pkg.omnios.org/r151046/core omnios
pkg set-publisher -g https://pkg.omnios.org/r151046/extra extra.omnios
pkg update pkg
pkg update
reboot
Manuals: Napp-In-One (All-In-One = ESXi + virtualized ZFS SAN/NAS in one server), read http://www.napp-it.org/doc/downloads/napp-in-one.pdf
Setup and update OmniOS and napp-it http://www.napp-it.org/doc/downloads/setup napp-it os.pdf
After download: please update OmniOS via pkg update and napp-it via menu About > Update. Please read the attached readme.txt.
Setup ESXi 6: Menu Create/Register VM > Deploy a virtual machine
Setup napp-In-One with our preconfigured ZFS appliance
- Verify that your mainboard + BIOS + CPU support vt-d (best: a mainboard with an Intel server chipset and a Xeon)
- Set all onboard Sata ports to AHCI and enable vt-d in the BIOS settings
- Disable Active State Power Management in the BIOS settings (it can cause problems on some SuperMicro boards)
- Insert a second SAS controller like an LSI 9207 (best) or 9211, or an IBM 1015 flashed to IT firmware
- Add a boot disk to an onboard Sata port (best: a 40+ GB SSD)
- Use external Sata enclosures like
http://www.raidon.com.tw/RAIDON2013/enweb/en product web/en intank/en iR2420-2s-s2.html
that allow hot mirror/clone/backup of boot disks. As an option, you can
use Clonezilla to clone boot disks.
Tip: Use the base VM for storage only and avoid a complex setup. Save the configured appliance as a template. On problems, you can simply import your storage VM and be up again within minutes. Use VMs on ZFS for all services that need a special setup.
- Install ESXi to your first Sata boot disk (an option is a
USB stick, but I prefer combined installations on an SSD in a Sata
enclosure).
- Connect to your ESXi box from a Windows machine via browser: https://ip of your box
- Install vSphere on Windows (you can download it via browser from your ESXi server) and connect to your ESXi box via vSphere
- Enable pass-through within ESXi for your SAS controller
- Import the downloaded napp-it OVA template
- Boot up your VM, enter a root pw, enter ifconfig to get the IP
- Manage your appliance remotely via any web-browser (http://serverip:81)
Set up a fixed IP; prefer the vmxnet3 vnic (I had stability problems with e1000 on ESXi).
- Share NAS storage (use SMB for Windows-compatible file sharing)
- Share SAN storage (use NFS); share this dataset also via SMB for easy access (snapshots, clone, backup)
- In the ESXi settings, add shared NFS storage and connect the NFS SAN share.
- Create new VMs on this NFS datastore
If you reboot ESXi, be aware of some delay until these VMs are booted (ESXi must wait until the storage VM is up), but they connect and come up automatically via NFS when you enable autostart for these VMs.
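The console-side steps above (fixed IP, file sharing, NFS datastore) can be sketched as follows. The interface name, pool/dataset names and all IP addresses are example assumptions, not values fixed by napp-it:

```shell
# --- On the OmniOS storage VM ---
# Set a fixed IP (interface name vmxnet3s0 and addresses are examples)
ipadm create-addr -T static -a 192.168.1.10/24 vmxnet3s0/v4
route -p add default 192.168.1.1

# Create a dataset and share it via NFS (SAN for ESXi) and also via SMB
zfs create tank/nfs
zfs set sharenfs=on tank/nfs
zfs set sharesmb=on tank/nfs

# --- On the ESXi host (ESXi shell) ---
# Mount the NFS share as a datastore named "nfs-san"
esxcli storage nfs add -H 192.168.1.10 -s /tank/nfs -v nfs-san
```

The same steps can of course be done in the napp-it web UI and the ESXi browser client; the commands only illustrate what happens underneath.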
Optimal ZFS Pool layout for ESXi datastores
- With several VMs, you have a lot of concurrent small reads and
writes. For good performance with such a workload, you need good I/O
values. Best is to build a pool from mirrored vdevs (2-way mirrors, or 3-way
mirrors for extra security/performance). Avoid Raid-Z configs. They
may have good sequential performance, but their I/O is the same as a single disk
(all heads must be positioned on every read/write).
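A pool of mirrored vdevs as described above can be created like this (pool and device names are examples; list your disks with the format command):

```shell
# Two 2-way mirror vdevs; ZFS stripes across the vdevs,
# so random I/O scales with the number of mirrors
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0
zpool status tank
```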
- When ESXi writes data to an NFS datastore, it requests sync writes
for security reasons. The default setting of ZFS is to follow this and
to do sync writes only. This is a very secure default setting but can
lower performance (compared to normal writes) dramatically. Sometimes
regular writes are 100x faster than sync writes, where each single write
must be done and committed immediately before the next one can occur (very
heavy I/O with small data, bad for every file system).
You now have two options: 1. Ignore sync write demands (= disable the sync property on your NFS-shared dataset), with the effect of data loss on power loss. 2. Add an extra ZIL device to log all sync writes. They can then be written to disk sequentially at full speed like normal writes.
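The two options translate to these commands (pool/dataset and device names are examples):

```shell
# Option 1: ignore sync write requests (fast, but data loss possible on power loss)
zfs set sync=disabled tank/nfs

# Option 2: keep sync=standard and add a dedicated log (Slog/ZIL) device
zpool add tank log c2t0d0
```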
If
you add a ZIL, you must use one with high write values and low latency.
Usually SSDs are bad at this. Best are DRAM-based ZIL drives like a ZeusRAM or a DDRdrive. Sad to say, they are really expensive. But a good SSD like an Intel S3700 helps a lot.
- Add pools built from Raid-Z1-3 vdevs for backup or if you need an SMB filer.
- Best: Use a mirror or Raid-Z of fast enterprise SSDs like the Intel S3610 or S3700 for VMs without a Slog, as they are fast even under continuous load and have power-loss protection. The Intel 730 is a cheaper option for SoHo/lab use.
For setup, see http://www.servethehome.com/omnios-napp-it-zfs-applianc-vmware-esxi-minutes/
Read these manuals on how to set up napp-it and napp-In-One. For VMware NFS best practices, see http://www.vmware.com/files/pdf/techpaper/VMware-NFS-Best-Practices-WP-EN-New.pdf
After download you can optionally update:
- OmniOS to the newest release, see http://omnios.omniti.com/wiki.php/
- napp-it to the newest release (napp-it menu About > Update)
For OmniOS/OI/Solaris Express 11: download the Oracle manuals for Solaris Express 11 (google them or check http://archive.today/snZaS; the Express 11 downloads work, links refer to the new Solaris 11). Oracle Solaris Express 11 and its free forks OmniOS/OI are nearly identical apart from encryption.