SpaceSys mechanics

Here you will find information about the current development status.
Donators also have access to the Downloads section.
Fri Feb 14, 2014 1:36 pm

  • We will update this thread day by day (or as often as we can).

    While writing this down I found out that it is very hard to put everything we have imagined into words here. Also, only one person is writing this, so you are missing out on things I am not thinking of right now that the guy right next to me is already implementing. I will iterate on this text, so please, those of you reading, bear with me.

    In this thread we will talk about how SpaceSys will work in the future, the way things behave, and what improvements we will make to our UI.
    Some of these things might change, and we may implement completely new features.

    Currently we have basic controls, no generated or scripted behaviors, and navigation is crude, but so far we have been concentrating on the shell, to get control over the Windows OS. Now that we are nearly finished with it, it is time for us to open the doors to suggestions and discussions.
    We have lots of ideas on how to make it work, but we want more voices.

    To get back to the title, I will divide the mechanics into categories:



    System interactions


    The way SpaceSys interacts with the system is that we have complete control over the Windows shell.

    This means that everything you see in the Windows environment we can implement in our UI: context menus, popup screens, progress bars, the taskbar, properties and other information that you can get from your system.

    The only thing we cannot do yet is show other software's screen output inside our 3D environments, like IE or Mozilla. Some of them (mostly open source software) can be ported and streamed onto a texture inside the 3D world, most importantly the Chrome browser. With Chrome implemented you can have all of the internet's content in 3D and in VR, you get the picture :)
    Warning!
    What you do in SpaceSys is actually done on your file system. So once you get your hands on it, don’t delete something you might miss later, as you are doing it for real. That goes for all file operations!
    We have "spaces" as folders and a global space that serves as the desktop - the environment itself. For now, files placed in the environment are kept as shortcuts to the original files, since we still need to make the global space folder on the PC a special folder, similar to the Windows Desktop folder.
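    As an illustration only (this is just a sketch of the general mechanism, not necessarily how SpaceSys stores its links, and CreateWorldShortcut is a made-up name), such a shortcut back to the original file can be created with the standard shell COM interfaces:

    [code]
    // Sketch: create a .lnk that points a world item back at its original file.
    // Assumes COM is already initialized (CoInitializeEx) by the host application.
    #include <windows.h>
    #include <shobjidl.h>
    #include <shlguid.h>

    HRESULT CreateWorldShortcut(LPCWSTR originalFile, LPCWSTR linkPath)
    {
        IShellLinkW* link = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER,
                                      IID_IShellLinkW, reinterpret_cast<void**>(&link));
        if (FAILED(hr)) return hr;

        link->SetPath(originalFile);              // target the real file on disk

        IPersistFile* file = nullptr;
        hr = link->QueryInterface(IID_IPersistFile, reinterpret_cast<void**>(&file));
        if (SUCCEEDED(hr)) {
            hr = file->Save(linkPath, TRUE);      // e.g. a .lnk inside the global space folder
            file->Release();
        }
        link->Release();
        return hr;
    }
    [/code]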
    Another thing that is missing right now is dragging and dropping files to and from our UI. We will implement it later, once we have covered all the file operation possibilities first.
    One of the more important traits of SpaceSys is that we are controlling the OS, and the OS does the hard work, at the same speed and efficiency as before. You are just controlling it through a live interface that we created.


    Space interactions


    Spaces are the analogy for folders in SpaceSys. You can have hundreds of open spaces in the world. We designed them to act exactly as you expect from a folder: you can do all operations inside and between them, just as you are used to in Windows Explorer.

    Icons inside spaces are simple 3D objects with a texture extracted from the original Windows icon images. We had lots of problems getting the best image quality, and a few of them look bad; that depends only on the icon image size and type (some lazy developers only ship 32x32 pixel images). We also had a lot of special-case icon types, shortcuts, etc… A few of them need a touch-up to be shown correctly and we will handle those later.

    Spaces can be moved around together with their icons, and you can enable or disable collision for them. Spaces can also serve as anchors that you will be able to focus on and rotate around.

    We need to implement many more mechanics for spaces, like icon sorting, list view, detail view, etc… We also need maximize / minimize buttons and a close-space button. Finally we need space scaling, where you will be able to scale a space down 1000 times, together with its icons and text; naturally, that will demand voice controls to find it again later.
    Spaces will be able to anchor to one another, so you will be able to make a wall of spaces. You will be able to transform any folder into a space and create spaces out of all its subfolders, and have them move in front of you without you moving, just by rotating them. We will talk more about that later.

    Currently we have disabled creating spaces out of anything you want; we have given you a couple that cover most needs. Later, when we clean up space generation, you will be able to create one out of anything you want. For now there is a bad side effect to that: when you click on a folder in the global space, it opens Windows Explorer.
    These are just the basics of a “Space”; there is a lot more to say, and we will as the time comes.
    Some of the things we still need to discuss are: currently open files (from a given space), multiple-space selection (something we miss in Windows), space taskbars, etc...
    I must add that the current shape and size are still defined by us, but later on you will be able to choose everything: type, size, colors, fonts, and physics.

    Icon interactions


    Icons are shown in two ways now. In spaces, an icon is a simple 3D object with a texture that we extract from the Windows icon, and then there are the 3D icons. We created more than 1000 3D icons for the most common software and file types, so most users will have all of their icons in the world in 3D.

    Currently we are still working on user generation of 3D icons in the world. You can drag any icon out of a space, but it will stay 2D for now. We are thinking of creating a mapping (or using the one from Windows) to immediately transform a 2D icon from a space into one of the 3D icons we made while you drag it into the world. We still don't know whether we will be able to do it automatically or whether the user will have to define each type themselves. We will eventually do it automatically, but mapping thousands of file types will take some time.
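    To give an idea of what such a mapping could look like, here is a tiny hypothetical sketch: a table from file extensions to our 3D icon meshes, with a fallback to the flat textured quad for types that have no 3D model yet (the mesh paths and names are made up for the example):

    [code]
    // Sketch: map a file extension to a pre-made 3D icon mesh, falling back to a
    // flat textured quad for unmapped types. All names and paths are hypothetical.
    #include <string>
    #include <unordered_map>

    static const std::unordered_map<std::wstring, std::wstring> kIconMeshByExtension = {
        { L".mp3", L"icons3d/music_note.mesh" },
        { L".jpg", L"icons3d/photo_frame.mesh" },
        { L".txt", L"icons3d/document.mesh"   },
        { L".exe", L"icons3d/gear.mesh"       },
    };

    // Returns the mesh to spawn when a 2D icon is dragged out into the world.
    std::wstring MeshForExtension(const std::wstring& ext)
    {
        auto it = kIconMeshByExtension.find(ext);
        if (it != kIconMeshByExtension.end())
            return it->second;
        return L"icons3d/flat_quad.mesh";   // stays "2D" until the type is mapped
    }
    [/code]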

    Saving icons in the world is done now: you can sort out your world / desktop, and when you start again they will be there (sounds trivial, I know). Each world will have its own set of icons, as the user defines it. You will be able to propagate world settings later on, but for now each world is on its own.
    Icons can be moved around the world. They have physics and are affected by collision and other forces (more of which will come later).

    Group movement is there, you can select a whole lot of them and move them around.
    There are a whole lot of mechanics planned for icons, and I think I could write for hours just about the possibilities we have opened up. I won't do that, as it is 1 a.m., but I will add more about icons later on. For those who have read this far, this is what I will write about later:
    global icon generation into shapes; icon sorting by size (physically shown), date and type;
    icon dragging as if on a thread; sending icons to an area or space without looking at it; selection exports, and things like that.


    Item movement


    This covers movement of items through the world, such as icons and spaces, and later on various canvases and objects like the media player, context menus, taskbars and such.
    Right now we have implemented basic controls for movement: when you click and hold the left mouse button you move the item, but only in the x-z plane; when you hold Control or mouse button 4 you move it in the y-z plane (you can change the bindings); and with a right mouse button click and hold you rotate the item, again only around x-z. That is what we use so far, and it behaves OK for mouse and keyboard input.
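    To make those bindings concrete, here is a rough sketch; the DragState struct and all names in it are invented for the example and are not our actual input code:

    [code]
    // Sketch of the item-drag bindings described above. All names are hypothetical.
    struct DragState {
        bool leftHeld    = false;
        bool rightHeld   = false;
        bool ctrlHeld    = false;
        bool button4Held = false;
    };

    // Decide how a mouse delta is applied to the grabbed item for one frame.
    void ApplyDrag(const DragState& s, float dx, float dy,
                   /*out*/ float pos[3], /*out*/ float rot[3])
    {
        if (s.leftHeld) {
            if (s.ctrlHeld || s.button4Held) {  // Ctrl / mouse button 4: move in y-z
                pos[1] += dy;
                pos[2] += dx;
            } else {                            // plain left drag: move in x-z
                pos[0] += dx;
                pos[2] += dy;
            }
        }
        if (s.rightHeld) {                      // right drag: rotate (x-z only for now)
            rot[0] += dy;
            rot[2] += dx;
        }
    }
    [/code]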

    We want to implement a carry mode, where you select items and carry them with you at the same distance from which you picked them up. That might be easier. We also want to let you select and group items into a shape and then move that around. We need to play around with that, and we will do it soon.
    As soon as you turn on a Kinect or a Leap Motion, things get serious. Then you have the option to grab items and carry them around. That will behave a lot differently from what you have right now, and since Kinect 1 is so imprecise, we have put the development of hand controls on hold until we get a Kinect 2 in our hands.

    Leap Motion support is still on hold. We need one more developer on our team just for Leap Motion, as we can see that the technology is becoming widely accepted and easy to use. With Leap you will be able to pick things and move them in 3D, as if you were moving invisible objects in front of you. Hand and gesture controls are waiting for a big milestone that has to be there first, and that is voice controls.

    What is also important about hand and gesture controls is that you use one hand for player movement. We will implement three types: one is the free flight you have now; one is absolute world control, where you drag the world around you; and the third will be a fixed mode, where you jump between anchors in the world and focus on them. The other hand will be used for selections and operations. There will be two kinds: one is free selection, where you control the mouse cursor, and the second is fixed selection between existing items at the anchor point. If you come to a space and focus on it, you will just browse the items there with your hand, without the mouse. Together with voice controls it will be intuitive, as if you were working with real items in the real world, and if you are in VR you will actually have that feeling.
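    A small sketch of how those modes could be laid out in code (the names are invented, for illustration only):

    [code]
    // Sketch: the three player-movement modes and two selection modes described above.
    // All names are hypothetical.
    enum class MoveMode {
        FreeFlight,    // what you have now: fly freely through the world
        WorldDrag,     // absolute world control: drag the world around yourself
        AnchorFocus    // jump between anchors (spaces, items) and orbit them
    };

    enum class SelectMode {
        FreeCursor,    // one hand drives a free cursor, like a mouse pointer
        AnchorCycle    // step through the existing items at the focused anchor
    };

    struct HandInput { float dx, dy, dz; bool grab; };

    void UpdatePlayer(MoveMode mode, const HandInput& moveHand)
    {
        switch (mode) {
        case MoveMode::FreeFlight:  /* translate the camera by the hand delta */      break;
        case MoveMode::WorldDrag:   /* translate the world root the opposite way */   break;
        case MoveMode::AnchorFocus: /* orbit the anchor / hop to the next one */      break;
        }
        (void)moveHand; // real code would consume the deltas per mode
    }
    [/code]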

    These are just the basics that I am talking about here. There will be joypad / joystick support, and even EEG headsets for thought control but we will talk about it later. Right now, we need more developers and more support to work on all of it.

    Context menus


    Currently, when you want to open a context menu on an item, we call and show the Windows menu. At the same time our engine is on hold, waiting for a user response or command from the context menu. Creating our own context menus is the next big thing we will implement.
    We will do it in 3D, as expected in a 3D UI. We need to call the Windows context menu on the selected items in the background, count how many entries it has, determine the context of those entries, and read the text out of the menu. With that info we dynamically create a 3D menu object that has all the entries from the Windows context menu and render the text to textures on the objects. (We are still working on the look.)
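    For context, once the shell menu exists as an HMENU (for example after IContextMenu::QueryContextMenu), reading its entries is plain Win32. The short sketch below only collects the labels so they could be rendered to textures; it is a simplified illustration, not our actual implementation:

    [code]
    // Sketch: walk an already-built shell context menu (HMENU) and collect its labels,
    // which a 3D UI could then render onto textured menu entries.
    #include <windows.h>
    #include <string>
    #include <vector>

    std::vector<std::wstring> ReadMenuLabels(HMENU menu)
    {
        std::vector<std::wstring> labels;
        const int count = GetMenuItemCount(menu);
        for (int i = 0; i < count; ++i) {
            wchar_t text[256] = {};
            GetMenuStringW(menu, i, text, 256, MF_BYPOSITION);
            if (text[0] != L'\0')
                labels.emplace_back(text);   // separators come back empty and are skipped
        }
        return labels;
    }
    [/code]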

    With that we will become independent of the Windows menus and free our UI from being suspended while waiting on a user command. You will be able to have every context menu in SpaceSys; no matter on which item it was summoned, it will be there.
    Creating such an object that is dynamically populated with shell items enables us to create a couple more elements, such as progress bars, properties and popup windows. All of that will be implemented shortly after we create our own 3D context menu.

    You can check out the first take on the context menu here:
    https://www.facebook.com/photo.php?fbid ... =1&theater

    Since the context menu becomes a world object and we have absolute control over its appearance, we can play with how it looks. Later on we will enable modding of it, so every user will be able to change the context menu as they want: colors, transparency, size, lifetime, movement, creation, animation - in one word, everything.

    We can create any kind of graphics for the context menu, and we will play a bit with that, but for now we will make it resemble what Windows users are used to. Later on we might make different kinds of menus.
    We will also integrate SpaceSys-specific context menus that you will call on items with Alt + right click or something similar; we still need to decide.
    That is enough about menus for now, I hope it makes sense to you.

    World behaviors


    Our worlds, or rather environments, will each have their own behaviors, or even sets of behaviors, that can be completely different from the next one. They will enable different kinds of environment effects to immerse the user in the environment.

    To better explain what I mean: you have seen the paradise island environment. It has real-time day/night transitions, and we plan to connect those transitions to your PC's clock, so you will have the correct time of day. Also, if it is a cloudy day outside, your environment will be cloudy or raining. You won't be able to have snow there, but there will be a glacier environment later on. We won't force the user to use those possibilities; it will be a choice.
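    A tiny sketch of that clock link, just to show the idea (the function name is invented and the mapping is deliberately simplistic):

    [code]
    // Sketch: derive a sun angle for the environment from the PC's local clock,
    // so in-world day/night matches real time. Function name is hypothetical.
    #include <ctime>

    // Returns the sun's rotation in degrees: 0 at midnight, 180 at noon.
    float SunAngleFromLocalTime()
    {
        std::time_t now = std::time(nullptr);
        std::tm local{};
        localtime_s(&local, &now);                 // MSVC-style localtime
        float hours = local.tm_hour + local.tm_min / 60.0f;
        return hours / 24.0f * 360.0f;             // one full revolution per day
    }
    [/code]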

    Transitions between environments will also become seamless in the future. Right now you have to shut down SpaceSys to load another world; later you will be able to load another world in the blink of an eye. Worlds serve as desktops, and each is different, just like normal desktops on your PCs. So, let's say you have multiple users: all of them can have their own world settings or use different worlds. Settings will be easy to export and even modify. When you sort out icons and spaces in the world of your choice, you will be able to export them to another world, or build a completely different layout.

    Worlds will have their own games, and when you wish to play one, you will be able to hide all the icons and spaces, or whatever you have open, and just play the game.
    You will be able to build your own worlds later on, from scratch, using an editor; we will talk about that more when the time comes.

    Multiuser


    Media Player


    Windows interactions


    Object properties


    Windows Popups


    Progress Bars
    Dalmat
     
    Posts: 135
    Joined: Thu Jul 04, 2013 1:25 pm

Tue Oct 21, 2014 11:35 am

  • One year after my first contact with this project, I'm still expecting something new to happen around mouse and keyboard camera control...
    I am ready to put a significant amount of my free time into the pot, without requiring more involvement from the developers than they already need to consider for game addons, like the one I support for ESO.
    My experimentation would be very well served by some AutoHotkey and LUA combination. Currently, I'm missing some movement fine-tuning needed to make any progress toward smooth and comfortable moves.
    Would you consider crafting a LUA API over your current input layers, so that we could experiment with original ideas about player motion?
    ivanwfr
    Donator
     
    Posts: 18
    Joined: Tue Nov 19, 2013 5:39 pm

Tue Oct 21, 2014 4:28 pm

  • Hello Ivan

    There have been updates to the mouse and camera, and more are coming with the next update we are cooking now. We have not written much about it (not even the parts we should have, like a manual!) :)
    But to tell the truth, we don't see mouse and keyboard as the primary input for SpaceSys. There are much better solutions out there for VR environments, like Leap Motion, which we will get to soon.

    Take a look at what we are going to do next: https://www.youtube.com/watch?v=ZK5FRPwIWVE

    That is just a small example of what we will do, but what we want will be more natural and simpler.

    We have lots of mechanics implemented right now and we can use them all, but we still have no time to concentrate on the functionality of those mechanics. We will continue to develop for mouse and keyboard, as the Leap will not be in every house soon, but you have to understand the way we plan for things: we implement mechanics to be used with the different kinds of controllers that will come along the way.
    And we are such a small team right now that we can't stretch to fine-tuning mouse movement or player movement, or scripted behaviours, or creating a LUA API for you guys who would actually work with it. I love the idea, and we would gladly do it if we had the time.

    Our main focus now is on getting application windows inside SpaceSys, transferring to the new engine, implementing support for the Oculus Rift DK2, enabling the editor, and quite a few more things.

    We are growing fast right now, getting investment and launching a Kickstarter campaign. For now, the only thing I can do is promise that you will get the mechanics you are thinking of; we share the vision, and we want those in too.

    BTW, I hate that I have to control the player speed through a set parameter now, or fix it at one speed. Player speed needs to be relative to objects, but it isn't that simple: objects have to have a scale relative to the world, and player speed and maneuverability depend on all those parameters, which are not implemented yet.
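    Just to illustrate the idea (speed that adapts to the scale and distance of nearby objects instead of one fixed parameter), a rough sketch with invented names and constants:

    [code]
    // Sketch: scale player speed by the size/distance of whatever is nearby,
    // instead of one fixed speed parameter. Names and constants are hypothetical.
    #include <algorithm>

    float AdaptiveSpeed(float baseSpeed,
                        float nearestObjectScale,    // scale of the closest world object
                        float distanceToObject)      // how far the player is from it
    {
        // Move slowly around small, close objects; speed up in open space.
        float scaleFactor    = std::clamp(nearestObjectScale, 0.1f, 10.0f);
        float distanceFactor = std::clamp(distanceToObject / 10.0f, 0.2f, 5.0f);
        return baseSpeed * scaleFactor * distanceFactor;
    }
    [/code]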

    Work work work..

    Update first, then the new engine and the new Oculus; at the top of the list are:
    Player mechanics (more movement options)
    Space mechanics - world item mechanics - scripted behaviors of world objects
    Leap Motion - Kinect 2 - voice controls

    Those are going in together.
    Dalmat
     
    Posts: 135
    Joined: Thu Jul 04, 2013 1:25 pm

Tue Oct 21, 2014 6:33 pm

  • Thanks, Dalmat, for the quick reply. I certainly understand the priority constraints and the fact that some of them have to be set as the way to go. What I can add is that I'm worried by the fact that I tend to check what's new only a few times a year. I would prefer to have the urge to give it a try every few weeks and send feedback with my opinion about the progress.

    The reason why has to do with the fact that I can't build a working environment I could use every day. If I were persistent enough to cope with the current lack of interaction through a keyboard and a mouse, most of my interaction would be spent fighting with my mouse and the spamming menus that pop up when you don't expect them (due to my fancy AutoHotkey mouse handlers). When you have serious work to do, your mind has to be available to handle real issues.

    My main missing features with the current input methods are those that any MMO has had to solve in its own way. If the display handler had a decent solution, I might use it as my default environment - which, BTW, is currently reduced to about 10 desktop icons and a huge tray menu ... because every time Windows destroyed my tidy, tuned desktop organisation, I could not find the right person to kill... With my current solution this will never happen again, but I'm stuck with the worst possible hierarchical menu solution.

    My situation could be condensed to a lack of distinct "moving mode" // "OS mode" interactions.
    Using a keyboard and a mouse is the worst thinkable means. If we have them, it's mostly because they were both a magical solution ... once. The truth is, if we used similar tools to drive our cars, it would be like having your brake pedal serve a different purpose depending on the circumstances ... there would be meat everywhere.

    It could also be that some kind of sensible transition from our ordinary mouse to a more effective 3D input device would be a good move. Most casual users could then give it a try without the mandatory step of getting some geek's device.

    My personal needs here are only about how I could be productive in this environment before I decide to live with an Oculus on my head 12 hours a day, for a few years, before we get some Hollywood-patented gesturing at hand.

    A simple 3D-navigation / OS-interaction mode selector would probably make me happy...

    Now, I'm ready to add some buzz to the Kickstarter pages...
    ivanwfr
    Donator
     
    Posts: 18
    Joined: Tue Nov 19, 2013 5:39 pm

Wed Oct 22, 2014 11:37 am

  • Well, I've read the Kickstarter page and I must say it conveys a good deal of what has to be said, from my point of view. The goals are ambitious enough to draw the attention of most awake people still expecting to get something new and useful from the digital world. I would even say it looks much better than I expected from a team of geeks. Most of the time I'm rather sarcastic about this kind of project, because I can't remember how many times I've had to admit that what seemed to me to be the next everyday-life-changing event just vanished with the wind before my eyes.
    One of two things may happen as I see it.
    * Either Facebook acquires the few remaining years of your life, just like Apple acquired the two FingerWorks Multitouch guys.
    * Or I and a few sick people like you and me are going to fill these pages for a few years, before some gust of wind shows me it has happened again.

    I hope I'm wrong, as I would hate to see something like that happen: just another money-can-silently-destroy-anything demo. But I'm always ready for a good surprise, which would be to see something great really happening for a change.

    Back to my one and only concern. All this said, written and read, I still think that depending on any kind of 3D input technology in its early stages will prove to be a chore that will keep most casual users away from this adventure. Just look at how most people are satisfied with a hundred-year-old QWERTY layout and a three-button mouse. If you can get these guys to click on your things with their usual tools, they won't have a choice but to try it and show their neighbors how good they are. Give them achievements and perks, and Fitbit may have a new buzz challenger.
    ivanwfr
    Donator
     
    Posts: 18
    Joined: Tue Nov 19, 2013 5:39 pm

Wed Oct 22, 2014 2:05 pm

  • I remember BumpTop; I used to play around with it from the beginning of their development. Some of the features that we will implement later, the ones with icon sorting and interaction, will look similar. But our primary goal is to remain independent.

    The mouse and keyboard will always be there, don't worry, and we will make use of them, but the mechanics that we create now will be used by a team of 5 geeks working on that input method only, not the 4 guys who sit in the office right now.

    And we have been with Microsoft, and some other guys in the US, talking about our stuff. They brought experts to talk with us, they gave us an offer, and we turned it down :)
    For better or worse, we are doing it alone, with Kickstarter and some angels in the background.

    Later, well, who knows what might happen. But by the time it happens, we will already have created all we hoped for. Maybe we will not be the only ones on the market, but we will be there.

    People do not want change, but change is coming so fast and so hard that no one will have any choice. And you know what I am talking about.

    Take a look at Magic Leap. ;)
    Dalmat
     
    Posts: 135
    Joined: Thu Jul 04, 2013 1:25 pm

Wed Oct 22, 2014 5:42 pm

  • It's a good thing you got to steer away from the easy path into one of those black holes this early. Still, I can't see how those suckers' greed would leave you alone once you gain some audience. Anyway, I have nothing meaningful to add on the topic atm.

    BumpTop and FingerWorks Multitouch were related past events. SpaceSys has already opened a new chapter in the history of computer interaction. Patents and experience are the only grounds on which a "normal" company will be able to keep the money crafters out of your hair. But when they start squinting at you, they will become irresistibly generous, poor you. They are waiting, I'm sure of it ;)

    And yes, I watched that whole Leap Motion Controller review and, even if these guys' "scientific approach" is questionable to say the least, what I get out of their spontaneously candid experimentation is how any imperfection becomes a main feature when it comes to deciding whether it's good or bad.
    - My own conclusion would also be against jumping on the learning curve right now with this device. What we see here is our future, no doubt, but it's not yet a tool I would work with at this stage.
    - The mouse is still the best current way to harness the digital extension, no matter how hard it can be criticized. Even a HOTAS with all its axes and switches plugged into DXInput interface can't compete with a mouse when it comes to some very well mastered moves.

    Back to my main horse now, as usual: most of your replies to my arguments about this fucking mouse show that you consider it a second-class citizen compared to the new means that will let us fly and swim in our future 3D world, and I agree, the sooner the better. But we are not there yet, and banishing the mouse so soon could scare away a lot of users who are ready to try and support the project.

    I put some of my money into Star Citizen ($+ there!), just to be part of this thing. As a sim addict, I'm not happy AT ALL with the crowd-concerned aims, but it's so cool that I couldn't just not do it. What you're up to will have the same effect on most life-aware, electrified people. Still, you can't ask most of them to go and get something to replace their mouse, pad and keyboard before they can say anything. Anyway, I know for certain that 100% of my human entourage will be amazed when I show them what their future will be, but they will never think they could be using it back home, because they lack this weird third-party tool it depends on... So sad!
    ivanwfr
    Donator
     
    Posts: 18
    Joined: Tue Nov 19, 2013 5:39 pm

Wed Oct 22, 2014 9:09 pm

  • Well, it basically comes down to this: no one can resist them if they want something. If I don't join the dark side I will be assimilated :)
    Which is not all bad, if I get the chance to do it right, at the level I want this to happen.

    I do consider mouse & keyboard a valid input method; I use it instead of a HOTAS for sims. I am an old-fashioned guy, which comes with age I think :) (37), but I do have to develop for the things to come. M&K does hold SpaceSys back, and to make it work the way we would like, we will have to put in a lot of effort. It is like developing for the Oculus Rift: already old tech..

    M&k is still the best solution for 2D interaction, but to truly go into 3D we need a 3D controller.
    Please take a look at this video also, it is very intriguing. https://www.youtube.com/watch?v=-W18BylZk6o

    And by the way, I am an ED guy :) SC is great; if I find the time for both, I will see you in space. :)
    Dalmat
     
    Posts: 135
    Joined: Thu Jul 04, 2013 1:25 pm

Thu Oct 23, 2014 12:11 am

  • It's OK to go for the one true goal with no compromise whatsoever, but when it comes to age having something to do with being old-fashioned, I am a proof of concept for the other way around. I'm 63 and I jumped from BASIC to C, to Unix, to Smalltalk back in the 80s ... to Java and, out of curiosity only, Squeak/Croquet/Open Cobalt.

    But everything I'm saying only reflects my currently narrow perception of the project, and I can see it will all be taken care of as long as you keep an eye on those points. Even Apple had to struggle with multi-touch interaction, meaning there can't be a simple way to go beyond it.

    I'm sure we'll have a chance to see our avatars in one of those virtual worlds, in some co-op mode - PVP is not my thing, as I lack a roaring ego.
    ivanwfr
    Donator
     
    Posts: 18
    Joined: Tue Nov 19, 2013 5:39 pm

Thu Oct 23, 2014 11:20 am

  • PVP in those games usually comes down to griefing; for PVP I play Battlefield, and in my team I am the youngest.
    You, sir, deserve a medal, if there were any to give, not just for your age but for your forward thinking. I can hardly find young people who understand SpaceSys and what it might become, and here is a person who lives in the future, held back only by the flesh on his bones. I salute you!
    In the 80s computers were banned in Croatia because of the communists; we had to smuggle knowledge, and that is what made it interesting for us.
    And because of that, and Elite on the ZX Spectrum, today we are having this conversation. :)

    Your perception is not narrow; you are talking about the things we are putting aside because of the sheer amount of work we need to do. And we have a small problem: living more in tomorrow than in today does not help with deciding what to do first, or where to concentrate our strengths.

    Talking with users helps (there are too few conversations like this), but now we are prioritizing controls, with M&K and otherwise, and it will be among the first things we start working on.
    That is a promise.
    We talked about LUA here; we could do something in the future, we are adding the idea to our list, and we will see. We need to do a lot of restructuring in the near future, so we will think about how to enable modding of the action controller, which handles all world interactions.
    Dalmat
     
    Posts: 135
    Joined: Thu Jul 04, 2013 1:25 pm
