Porting Blender's mouse / key bindings to Interface


#1

While experimenting with the Controller APIs I was finally able to override all important keys from JavaScript.

Basically everything except for OS-level combinations and three stubborn symbols on the numeric keypad.

This means it’s possible to clear the slate wholesale (i.e., neutralize all existing keyboard landmines) and then apply altogether different bindings.
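For the curious, a minimal sketch of that neutralizing step, assuming Interface’s Controller.captureKeyEvents / releaseKeyEvents calls (the key list here is just illustrative):

    // Claim a set of keys so Interface's built-in bindings stop firing
    // while this script runs. Which keys to claim is up for debate.
    var CAPTURED_KEYS = ["g", "r", "s", "x", "y", "z"];

    CAPTURED_KEYS.forEach(function (text) {
        Controller.captureKeyEvents({ text: text });
    });

    // Hand the keys back when the script stops.
    Script.scriptEnding.connect(function () {
        CAPTURED_KEYS.forEach(function (text) {
            Controller.releaseKeyEvents({ text: text });
        });
    });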

What comes to mind first is trying Blender’s user input scheme in context. Maybe this can offer a more precise and efficient way to achieve micro-edits?

For example, to translate an Entity exactly -1 unit on the X axis you might: right-click it, type G, X, -1 and hit ENTER. No property dialogs are needed for that.

The G is a legacy mnemonic that in Blender means Grab/Move, so the above could read: select that entity, Grab it, and move it along the X axis by -1 unit; apply.
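Sketched in script form – hedging here: this assumes Interface’s Controller.keyPressEvent signal and Entities.editEntity, plus a hypothetical selectedEntityID supplied by whatever selection mechanism is in play:

    // Hypothetical modal parser for: G, X, -1, ENTER
    var mode = null,    // "grab" once G is pressed
        axis = null,    // "x" | "y" | "z"
        digits = "";    // accumulated numeric entry, e.g. "-1"

    Controller.keyPressEvent.connect(function (event) {
        var t = event.text.toLowerCase();
        if (t === "g") {                                        // enter grab mode
            mode = "grab"; axis = null; digits = "";
        } else if (mode === "grab" && "xyz".indexOf(t) !== -1) {
            axis = t;                                           // constrain to an axis
        } else if (mode === "grab" && "-0123456789.".indexOf(t) !== -1) {
            digits += t;                                        // accumulate "-1"
        } else if (mode === "grab" && event.text === "RETURN" && axis) {
            // "RETURN" per Interface's KeyEvent naming (assumed)
            var offset = { x: 0, y: 0, z: 0 };
            offset[axis] = parseFloat(digits) || 0;
            var props = Entities.getEntityProperties(selectedEntityID, ["position"]);
            Entities.editEntity(selectedEntityID, {
                position: Vec3.sum(props.position, offset)
            });
            mode = null;                                        // apply and exit grab mode
        }
    });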

(I think it’s worth noting how a story is being conveyed there to the computer – and how much wouldn’t have to change if/when switching to a voice-driven editing experience later…)

Anyway, would anyone be interested in helping to identify a useful working set of Blender crossover operations?

From there I could try connecting some of them to glue code and we could see what works.

Mostly I’m thinking of this as rewiring (i.e., different ways to access the same features), but it might be worth doing some extra math to bring in things like Blender’s predictable numeric-keypad Camera controls. Or even Blender’s proportional editing – which could prove very powerful if adapted to work on sets of Entities instead of vertices.
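On the proportional-editing thought, the extra math is mostly a falloff weight per Entity. A sketch of the core of it, assuming a linear falloff (Blender offers several curves) and the same Entities/Vec3 helpers:

    // Move a set of entities proportionally: entities near `center` follow
    // the full offset; influence fades linearly to zero at `radius`.
    function proportionalTranslate(entityIDs, center, offset, radius) {
        entityIDs.forEach(function (id) {
            var props = Entities.getEntityProperties(id, ["position"]);
            var weight = Math.max(0, 1 - Vec3.distance(props.position, center) / radius);
            if (weight > 0) {
                Entities.editEntity(id, {
                    position: Vec3.sum(props.position, Vec3.multiply(offset, weight))
                });
            }
        });
    }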


#2

I like this idea but I’m kinda Blender-biased. G, R, S, J, P, and middle mouse – grab, rotate, scale, join, separate, and pan/orbit.
What’s nice about the scriptable controls is we can be totally self-serving with a Blender-based control setup, without spoiling things for the “I want it to work just like Mario Kart 64” camp.


#3

Awesome – G,R,S make immediate sense to me.

Could you elaborate a little on how/where you would think to employ J and P? And did you have something in mind for how they would map to existing Interface features?

Otherwise, I don’t think the world is ready yet for in-Interface FBX editing, so we’ll have to figure something out at the Entity level (Entity.parentID looks promising as a way to approximate the join effect).
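For instance, a first stab at J (and P) via parenting might be as simple as this – with childID and targetID assumed to come from some picking step:

    // Approximate Blender's Join: attach one entity to another by parenting.
    Entities.editEntity(childID, { parentID: targetID });

    // And approximate Separate (P) by clearing the parent.
    // (Uuid.NULL assumed as the null-parent sentinel.)
    Entities.editEntity(childID, { parentID: Uuid.NULL });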


#4

I was thinking, with the new parenting capability, of things in-world – maybe parenting doors to frames?


#5

Shift+A – add entity
Shift+D – duplicate entity
maybe?
But great idea to try G, X, -1.
And rotate – R, then X, Y, or Z, then degrees – would be very useful.
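The Shift+D duplicate seems very doable – a sketch, again assuming a hypothetical selectedEntityID and nudging the copy so it’s visible:

    // Hypothetical Shift+D handler: clone the selected entity with an offset.
    Controller.keyPressEvent.connect(function (event) {
        if (event.text.toLowerCase() === "d" && event.isShifted) {
            var props = Entities.getEntityProperties(selectedEntityID);
            props.position = Vec3.sum(props.position, { x: 0.1, y: 0, z: 0.1 });
            Entities.addEntity(props);   // server assigns the copy its own ID
        }
    });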


#6

… tinkered with the new parenting capability and it seems to be geared towards the “Mount Olympus” scenario.

And it also seems to violate Newton’s First Law of Motion (tearing a new one in the virtual space-time continuum).

Could be worse; will see if I can work around the dark matter effects.

Yeah, totally – and I’m thinking R-X-X will be handy too: R-X for predictable rotations on the global X axis and R-X-X for predictable rotations on the Entity’s local X axis!
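The global/local distinction comes down to which side of the quaternion multiply the delta lands on. A sketch, assuming the Quat helpers and a selected entity:

    // R-X vs R-X-X: same delta rotation, different multiplication order.
    var X_AXIS = { x: 1, y: 0, z: 0 };

    function rotateAboutX(entityID, degrees, local) {
        var props = Entities.getEntityProperties(entityID, ["rotation"]);
        var delta = Quat.angleAxis(degrees, X_AXIS);
        // Global axis: compose in world space (delta * current).
        // Local axis:  compose in the entity's own space (current * delta).
        var rotation = local ? Quat.multiply(props.rotation, delta)
                             : Quat.multiply(delta, props.rotation);
        Entities.editEntity(entityID, { rotation: rotation });
    }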

Are you guys both on Windows?


#7

What I hope for is a little drop-down menu (scripts can add menu items to the interface) that gives you a choice: Blender, SL, Maya, Custom. That then sets the behaviors. Custom would bring up a window of actions and provide additional keycode selections that you associate with each action.
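That seems within reach of the existing menu scripting. A sketch, assuming Interface’s Menu API (names illustrative, and applyBindingPreset is a hypothetical stand-in for swapping the active key map):

    // Illustrative preset picker built on the scriptable menus.
    var PRESETS = ["Blender", "SL", "Maya", "Custom"];
    Menu.addMenu("Key Bindings");

    PRESETS.forEach(function (name) {
        Menu.addMenuItem({
            menuName: "Key Bindings",
            menuItemName: name,
            isCheckable: true,
            isChecked: name === "Blender"
        });
    });

    Menu.menuItemEvent.connect(function (item) {
        if (PRESETS.indexOf(item) !== -1) {
            applyBindingPreset(item);   // hypothetical: swaps the active key map
        }
    });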


#8

Cool. I’ve been keeping an eye out for ways to generalize in those directions.

I’m sorta sensing two different aspects here. First, the desire for a memorable, mature and justifiable keyboard instruction set. And (almost separately?) the desire to leverage unused keys in creative ways. Does that sound approximately right?

In any event, at some point it seems necessary to wrangle defaultScripts so users can actually use these kinds of customizations without them being silently undone. What do you think about a standalone batch file that first ensures the user’s startup-script preferences are configured in their .ini file and then simply launches Interface.exe as usual?


#9

Update: so far this approach works awesomely; currently testing initial solutions for grabbing, rotation, axial bias, dupes, and deletes! More to come…


#10

Just brainstorming out loud here.

While working on defining/managing key bindings in JavaScript, I realized there isn’t a great way of doing that cross-platform (or even on the same platform for a different keyboard).

And to make things more fun, Qt hard-codes an interpretation of the Control and Meta keys before application code receives the event – per Qt5’s docs, on macOS the Command key is reported as Qt::ControlModifier and the Control key as Qt::MetaModifier (unless the AA_MacDontSwapCtrlAndMeta application attribute is set).

And for VR use cases, expecting a user to hit a non-trivial key combo is already like asking them to pin multiple tails on a donkey in the correct order across 100+ exact locations – all while digitally blindfolded by an HMD. So conventional keyboard use will probably remain something developer-types do frequently – and typical VR users, not so much.

With both virtual shortcuts and physical key sensors in mind, I’m thinking about a hybrid browser/DOM-like scheme. It would provide access (where possible) to the physical key – while still entertaining the conventional virtual interpretation and remaining compatible, in degrees, with standard web scripts.

For example:

  • Mac keyboard (regardless of operating system):

      • ⌥ Opt | "option" | event.mac.optKey | event.altKey

      • ⌘ Cmd | "command" | event.mac.cmdKey | event.ctrlKey

      • ⇧ Shift | "shift" | event.mac.shiftKey | event.shiftKey

  • Windows keyboard (regardless of operating system):

      • ⎇ Alt | "alt" | event.win.altKey | event.altKey

      • ⎈ Ctrl | "control" | event.win.ctrlKey | event.ctrlKey

      • ⇧ Shift | "shift" | event.win.shiftKey | event.shiftKey

  • Physically-based keys:

      • Numpad 0 | "numpad0" | event.keypad.num0

  (columns: key | name | physical accessor | virtual/DOM accessor)

Note: this ponders a lower-level representation (than say Blender bindings); for managing virtual shortcuts via scripting I’m thinking of going with something colloquial like KeyboardJS’s API.
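To make that concrete, here’s a sketch of a normalizer that could sit between raw key events and script handlers. The raw-event field names (isAlt, isControl, isShifted, isKeypad) mirror Interface’s KeyEvent, and detectMacKeyboard() is a hypothetical hardware sniff:

    // Hypothetical normalizer: wrap a raw key event in the hybrid shape above,
    // exposing physical (mac/win/keypad) views alongside DOM-style virtual ones.
    function normalizeKeyEvent(raw) {
        var onMac = detectMacKeyboard();   // hypothetical hardware detection
        return {
            // Virtual, DOM-compatible interpretation:
            altKey:   raw.isAlt,
            ctrlKey:  raw.isControl,
            shiftKey: raw.isShifted,
            // Physical views, populated only for the hardware actually present:
            mac: onMac ? {
                optKey:   raw.isAlt,
                cmdKey:   raw.isControl,   // Qt reports Command as Control
                shiftKey: raw.isShifted
            } : undefined,
            win: !onMac ? {
                altKey:   raw.isAlt,
                ctrlKey:  raw.isControl,
                shiftKey: raw.isShifted
            } : undefined,
            keypad: raw.isKeypad ? { num0: raw.text === "0" } : undefined
        };
    }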