Prototype of a city square that creates music: city-goers run around the moving circular "tracks" of a giant turntable while camera tracking turns their arm gestures into beats on each audio track (image by a Danish architect at our MAB workshop in Aarhus, 2012).
Actually, this is more of a plea.
Consider this imagined scenario. You are an academic having coffee with a colleague. They do interaction "design-y" stuff and you ask them what they are working on. When they give you a broad overview of the technology and interaction, you might say, "Well, that is all well and good, but I need to research practical and useful things." If they know what your focus (read: tunnel vision) is, chances are they will then explain how a modification or redirection of the interaction design they were just describing will allow you and your content to do X. "Oh, that I can use," you might say.
Just hold on a minute here. They described an application, tool or service with more generic potential, and then had to use their creative imagination (which you didn't bother tapping into) to show how it could work for you, after you had poured mild scorn on their research. It seems to me they had the brainpower to:
a. come up with a generically useful, hopefully transferable idea, concept or tool;
b. be able to summarize your research
c. understand how this new idea, concept or tool could apply to your context in a way that you could understand, AND
d. not be offended that you still didn't grasp that the exemplar they provided you was only a subset of what they had invented to start with.
I am not sure step d would happen, though. And I wouldn't blame the interaction designer if they didn't have coffee with you again.
Hello Curtin students: if you can do a Master's course project (or you are a final-year undergraduate), you might also be able to build on one of these ideas.
Corbin is my summer intern, looking at:
1. Kinect-Minecraft v2: a software framework for non-programmers to create their own gestures for Minecraft interaction: https://www.youtube.com/watch?v=09tc3nLgx9w
See also: https://maker.library.curtin.edu.au/2016/08/02/creating-a-gui-for-kinect-v-2/
2. Kinect-Unity pointer software.
3. Point clouds with a Head Mounted Display (HMD)/Unreal. Status: exploratory.
See also CAA2017 slides from Damien Vurpillot: https://www.academia.edu/30171751/Exploring_massive_point_clouds_how_to_make_the_most_out_of_available_digital_material
4. Corbin will narrow the above down into one main investigation: evaluating the sharing of virtual experiences across different displays (cylindrical versus HMD) and uncovering similar papers with a collaborative learning focus. Ideally there will be a comparison of Unity versus Unreal.
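To give a feel for what project 1 involves, here is a minimal sketch of letting non-programmers bind named gestures to game actions through a lookup table instead of code. All names here (GestureMapper, the gesture and action labels) are my own illustrative assumptions, not the actual framework's API.

```python
# Sketch: a table-driven gesture-to-action mapping, so that users can
# define bindings in a GUI rather than by programming. Hypothetical names.

class GestureMapper:
    """Maps recognised gesture labels to in-game actions (e.g. key presses)."""

    def __init__(self):
        self.bindings = {}

    def bind(self, gesture, action):
        """Associate a gesture label with an in-game action."""
        self.bindings[gesture] = action

    def resolve(self, gesture):
        """Return the bound action, or None if the gesture is unbound."""
        return self.bindings.get(gesture)

# A user defines bindings through a GUI; the table then drives the game.
mapper = GestureMapper()
mapper.bind("raise_left_arm", "jump")
mapper.bind("lean_forward", "walk_forward")
```

The point of the table is that a Kinect gesture recogniser only has to emit a label; everything game-specific lives in user-editable data.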
A journal asked me to respond to a paper that briefly mentions the above. My notes to self include these general questions, which I seldom find answered in virtual heritage papers and which are not mentioned in my response (the journal has a strict word limit):
- Interpretation: It is very hard to extrapolate from VH papers how various interpretations are fostered.
- Beginnings: Where do you place a visitor in a virtual site?
- Dynamic alterity: How should or could they navigate time, space and interpretation?
- Art versus Scientific Imagination: How should visitors separate artistry from current reality and from interpreted virtuality? What if the artistry is impressive but speculative?
- Projects: Where can the projects (that apparently relate to the questions posed in the text), be experienced or otherwise accessed? How will they be preserved?
- Interactive Navigation: How do we navigate time, space, interpretation, and task/goal?
- Authenticity, accuracy and artistry: How does one balance all three?
Time for a quick update on recent tech offerings
The following was a successful grant, funded by the Curtin Institute of Computation.
Title: Leveraging Low-Cost and Free Linked Open Data and Hybrid GIS/3D For Cultural Heritage Visualisation (6 months)
The program/research plan:
The two ECRs with the help of the two Curtin Professors will investigate the use of an application, possibly the Pelagios Framework (http://commons.pelagios.org/), an online portal that can combine maps, charts, documents, pictures and dynamic data, to create interactive visualisations and predictive cartographic analysis tools.
Figure 1: Pelagios
This pilot study will explore whether the application can accept, display and dynamically link to 3D models and their subcomponents, using GIS data, so that maps and 3D models can be displayed and interacted with online. This specific application theoretically accepts simple 3D STL models, but three.js and Web3D models have not been investigated. For an existing related example, see http://www.usc.edu/dept/LAS/arc/mayagis.html
The two ECRs will derive a 3D model with GIS-related data and design an online Pelagios Commons framework (or similar) for viewing a 3D model of a heritage site, preferably in Australia, in which the model controls place elements in a side-located text document, online map or chart, and vice versa.
http://pleiades.stoa.org/ shows some of the possibilities of Linked Open Data, but not how 3D can interact with a LOD GIS platform.
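To make the intended linking concrete, here is a rough sketch of joining a gazetteer place URI to a 3D asset and its coordinates, so that a map marker and a model viewer can reference the same entity. The record fields and the model registry are my own illustrative assumptions, not the actual Pleiades or Pelagios schema.

```python
# Sketch: joining a place record (in the spirit of Pleiades Linked Open
# Data) to a 3D model file and its subcomponents. Data is illustrative.

places = {
    "https://pleiades.stoa.org/places/579885": {
        "title": "Athens",
        "lat": 37.97,
        "lon": 23.72,
    },
}

# Hypothetical registry joining place URIs to 3D model files and parts.
models = {
    "https://pleiades.stoa.org/places/579885": {
        "file": "athens_acropolis.stl",
        "subcomponents": ["parthenon", "erechtheion"],
    },
}

def model_for_place(uri):
    """Combine the gazetteer entry and 3D asset for one place URI."""
    place = places.get(uri)
    model = models.get(uri)
    if place is None or model is None:
        return None
    return {**place, **model}

record = model_for_place("https://pleiades.stoa.org/places/579885")
```

Because both sides key off the same URI, selecting a place in a text document or map could fetch and highlight the corresponding model (or subcomponent), which is the kind of bidirectional control the pilot study describes.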
Proposed engagement of external and community groups
- Firstly, we will collaborate with the following non-CIC staff at Curtin to develop the Curtin University workshop.
- Secondly, we will invite members to test the prototype and provide feedback, with potential collaboration and grant opportunities to follow.
- We will test the prototype with archaeologists, heritage specialists or architects in another Australian city. The longer-term aim is to engage them in applying for a linkage to design a more permanent and larger collection and online portal for a more highly featured, user-friendly and robust design.
For an interesting potentially related interface please see http://www.impa.br/opencms/en/