Has anyone tried using the Asset Transfer Protocol with a Network Attached Storage device? I have no experience with ATP as of right now, and I want to plan a dedicated build if leveraging a NAS is possible.
Been using it since it rolled out (lost track of how long that’s been). It works well, but I would suggest waiting a bit until some updates are made.
As far as its performance goes: running it from a server on a fat data pipe, it’s hands down better than HTTP. Running it from home, where your upload speed may be a small fraction of your download… you get what you get.
Well played sir, thank you. When it comes time to integrate, I’ll come knocking.
u said it lol
i gots nothinnn
We will need to add a simple feature that lets you specify a custom assets directory for this to be possible, but otherwise nothing should stop you from using whatever storage you want, assuming you can map the NAS to something that looks like a local drive to the application.
The best way to get assets to the asset-server is just to drop them in the assets directory; the upload process for multiple assets at a time from the client is currently not very robust. The assets directory is in the application data folder under assignment-client/assets. When the asset-server launches, it will check this directory for new files and add them to its in-memory list of available assets. When it runs, it should output the new ATP filenames for the assets it has discovered in the log.
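If you want to sanity-check what names your files will get before dropping them in, you can hash them yourself. A minimal sketch, assuming (based on this thread) that ATP names assets by the SHA-256 of their contents; the function name and return shape here are my own, not part of the asset-server:

```python
import hashlib
import os

def scan_assets(assets_dir):
    """Compute the SHA-256 content hash for each regular file in
    assets_dir, roughly mirroring how the asset-server appears to
    index new files at launch. Returns {original_filename: sha256_hex}."""
    index = {}
    for name in sorted(os.listdir(assets_dir)):
        path = os.path.join(assets_dir, name)
        if not os.path.isfile(path):
            continue
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in 64 KB chunks so large .fbx files don't load into RAM.
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        index[name] = h.hexdigest()
    return index
```

Comparing this output against the asset-server log would let you confirm which original file a given hash-named asset came from.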
I’d like to make ATP less of a frustration for you. What are the biggest pain points you have with it right now?
Beyond all that… it needs a sha256 -> human-readable-name translation “database” as well. That’s the true blocker, since it doesn’t take long to end up with a few hundred MB of “WTH is this?” sha256.fbx files…
But a NAS won’t improve speed for the assets.
The assets still need to be loaded from the NAS, and your domain still needs to send them to the client.
If you have small uplink bandwidth, nothing gets faster.
A NAS can be interesting if you have a VPS and some Dropbox-style system that lets you mount the NAS on your VPS as a local disk, but the domain still has to send the assets. It’s only a cheap way to expand storage capacity and get a better backup system.
Some hosting providers also offer a Dropbox-style system, so the speed should be pretty fast since it’s in the same datacenter.
*Smiles* When I tried to migrate all my Dropbox assets to it using the ATP migration tool,
it flatly refused to migrate any Dropbox models. Maybe the aforementioned caching thing?
I would like to get everything onto ATP if possible.
Secondly, when I manually moved them there, they seemed to give up attempting to load.
Smokey stood in my domain for 8 hours and couldn’t see any of them.
She doesn’t have the fastest net in the world, but I figured once she has them cached, she has them cached, so that won’t matter.
Can it be made to keep trying until it succeeds?
I went on to stick the assets I had migrated up on GitHub, thinking this script Thoys made could swap the links over to there, but that doesn’t seem to work now…
Thinking from the perspective that the easier it is to move assets around between servers en masse, the easier it is for the end user.
Yes, definitely. We have discussed a “pointer” style system where you would be able to name assets and have that point to a specific versioned asset.
It will require a bit of design and engineering. We don’t have a current timeframe for the addition of this feature, but I’d like to acknowledge that we know this will be very important to increase the usability of ATP.
In the interim, the easiest way to get the ATP url for an asset is to drag the original onto the Interface window to trigger an upload of that asset. Uncheck “add to scene” and start the upload. Even if that asset was already present on your asset-server, the upload should be relatively quick on your local machine, and you will get a dialog with the ATP url for that asset.
I have an easier way than that, but, yes
But classically, assets have always been spread around key-based. It is the inventory database that, among other things, performs the translation from “Eye of Newt” to “244f23d4658b4f17b7bcb6ca1518154a…b36c74b5011648bd9086c9901fc1a0ea.fbx”.
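That translation layer could be as small as a JSON file mapping names to hashes. A hypothetical sketch only: `AssetNameDB`, its file format, and its method names are all made up for illustration, not part of ATP or any existing inventory database:

```python
import json
import os

class AssetNameDB:
    """Toy human-name -> sha256-filename translation table, persisted
    as a JSON file, in the spirit of the inventory lookup described
    above. Everything about this class is illustrative."""

    def __init__(self, path):
        self.path = path
        self.names = {}
        if os.path.exists(path):
            with open(path) as f:
                self.names = json.load(f)

    def set(self, human_name, sha256_name):
        # Record the mapping and write the whole table back out.
        self.names[human_name] = sha256_name
        with open(self.path, "w") as f:
            json.dump(self.names, f, indent=2)

    def lookup(self, human_name):
        # Returns the sha256 filename, or None if unknown.
        return self.names.get(human_name)
```

Even something this crude would answer “WTH is this sha256.fbx?” as long as you record the mapping at upload time.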
The ATP migration tool was really not meant to see the light of day. It’s often unsuccessful and doesn’t give you much of an idea as to where it is in the process, until it fails. We used it when ATP was introduced to attempt to migrate a couple of our domains.
I may be able to do a pass on the tool to make it work a little better. Until then, while painful, uploading each asset individually (download from Dropbox locally, drag and drop onto Interface, uncheck “add to scene”, then replace the model URL with the ATP url) is a way to move over to ATP that is more likely to succeed.
The other option would be to manually drop all of your assets into the app data directory I referenced above and then grab the resulting ATP URLs from the log.
As we speak, I’m testing against the place ‘free’ (which I think is yours) to see why the download of ATP assets occasionally fails. Even on a slow network, all ATP downloads should eventually succeed, and once a download has succeeded and the asset is cached, it should not be re-downloaded the next time. Hopefully with that problem solved you’ll be able to trust that visitors to your domain will always see your ATP assets.
Absolutely - but if you have people jump - prematurely - on the ATP bandwagon then they end up with an unmanageable mess and will fetch fire and pitchforks. My only point is don’t jump on ATP without understanding what you’re getting into - not that an asset should have a human readable name or anything else. I love the essential automatic dupe prevention nature of SHA256 names.
I quite agree.
How ATP works seems pretty clear to me now. It’s also easy to clean up when you re-upload an asset after a fix. Maybe because I used OpenSim, it feels more logical to work with UUIDs; it’s just harder to find an asset again if you don’t write down the ATP url at upload.
Try it when you have 50K plus then comment again.
Also - let’s talk about something that, for some of us, is so obvious that we don’t bother to talk about it.
I run my stack from a data center on Linux. It’s connected to a whopping fast set of high-speed data pipes, and it’s capable of easily reaching the limits of its 100 megabit Ethernet port in bandwidth. In my case that’s the primary bottleneck - file I/O is not pushed to limiting levels before hitting the ceiling on port speed. Now - what if, instead of being on a 100 megabit port, it was on a gigabit port? Well - with the data pipes to the net available, in theory it could easily push a gigabit/second, but there you might start to see file I/O and other things come into play. That’s the world some of us live in. Now - let’s say I wanted to stop paying for that and run it from home. What would that look like?
My specific case: 50 megabits incoming, 5 megabits outgoing. If I upgraded to the fastest they offer for home use, I’d be at 350 megabits incoming, 35 megabits outgoing. Now let’s stay at 50/5, as I have no desire to increase my monthly internet bill 6X.
I’ve got 5 megabits/second to work with. Ignoring everything but ATP… how rapidly will a visitor to my space with the same speeds saturate my uplink pipe? Instantly. But it also has to serve audio and avatar state and… It’s not a sour-grapes-on-HiFi statement to say that, at current levels, it’s a bit of a long shot that a domain run from home can maintain any real capacity or responsiveness. X years from now that may well be different, but, at least in the US, the habit of asymmetrical inbound/outbound speeds seems reasonably entrenched, for both technical and “political” reasons.
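The back-of-the-envelope arithmetic behind “instantly” is easy to sketch. A rough estimate that ignores protocol overhead and all the other traffic (audio, avatar state) sharing the link; the 25 MB asset size is an arbitrary example, not from the thread:

```python
def transfer_seconds(size_mb, uplink_mbps):
    """Rough time to push one asset through an uplink:
    megabytes * 8 -> megabits, divided by link speed in Mbps."""
    return (size_mb * 8) / uplink_mbps

# One hypothetical 25 MB asset over the 5 Mbps uplink described above:
# 25 MB * 8 = 200 megabits, 200 / 5 Mbps = 40 seconds of a fully
# saturated pipe -- for a single visitor fetching a single asset.
```

Multiply by a scene’s worth of assets and a handful of simultaneous visitors and the uplink is the whole story.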
These are great points @OmegaHeron.
The long-term design for ATP should accommodate home users with a small upload pipe by letting you optionally ask for help serving your assets from other asset-servers. As a client, I could saturate my download pipe by requesting 10 blocks of the asset I need from 10 different asset-servers, each with 10 Mbps of upload capacity available at that time.
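As a toy illustration of that proposed swarm idea — not anything ATP does today — here each “server” is just a dict holding identical asset bytes, and the client pulls different byte ranges from different replicas in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_chunked(servers, asset_id, chunk_size):
    """Hypothetical swarm download: request a different byte range of
    the same asset from each replica and reassemble in order. A
    'server' here is just {asset_id: bytes}; real asset-servers have
    no such API today -- this only sketches the proposed design."""
    data = servers[0][asset_id]     # all replicas hold identical bytes
    total = len(data)
    ranges = [(start, min(start + chunk_size, total))
              for start in range(0, total, chunk_size)]

    def fetch(i, start, end):
        # Round-robin the byte ranges across the available replicas.
        replica = servers[i % len(servers)]
        return replica[asset_id][start:end]

    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        parts = pool.map(lambda args: fetch(*args),
                         [(i, s, e) for i, (s, e) in enumerate(ranges)])
    # map() yields results in submission order, so a plain join
    # reassembles the asset correctly.
    return b"".join(parts)
```

Because the content is addressed by its SHA-256, the client can verify the reassembled bytes against the asset name regardless of which untrusted replicas served the pieces — that property is what makes this kind of swarm serving plausible.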
Until then, a home user with the asset-server on their local machine will definitely be bound by the capacity of their upload pipe.