Saturday 28 November 2015

PhD finished - lessons learned

I successfully defended my PhD on the 20th of November. As you can guess, it was a relief. Almost 6 years of work, synthesised into one book... and at that moment, you think of all the experiments that were good ideas yet never got published.

That was a very long journey.

I could write a long text describing the positive and negative sides of this long journey - you know, reflecting on it... believe me... I have done it several times... so I will summarise it in just three points:

1. Keep working and just do the best you can... and even when you do that, be ready for criticism. Think that when you read a published paper from another PhD student:

  • the promoter agreed with the content. In many cases, even co-promoters and other colleagues from the same and other institutions agreed with it;
  • three reviewers agreed that the content was worth publishing;
  • editors or program chairs gave the OK...
       So, if you thought this was going to be easy, give up and change your career...

2. probably one of the biggest lessons learned from your PhD will be: learning how to deal with disagreement. Good ideas are not enough; the idea and the results will require the consensus of at least 4 people with different backgrounds, expertise and interests.

3. don't set the wrong goals (maybe that's too harsh a statement)... MIT has a limited number of places ;). Don't get me wrong, there are people who need this sort of goal. However, the PhD is about the journey, and it will take you between 3 and 5 years. Your most important goal is to improve. It's an individual goal. Remember that each publication will be reviewed by at least 4 people (see point 2 ;)). If you finish your thesis with 5 papers, that probably means between 15 and 20 people thought your work was relevant to the field... aren't you then a suitable candidate for any institution in the world? What if, in addition, you won best paper awards, helped organise workshops, reviewed papers and built an interesting network of contacts?

Don't you think that will keep you busy for 3-5 years? Good work will lead you to wonderful learning experiences, so don't worry about the next step. If you nail your PhD, many interesting groups will want and need you.

Summarising: keep up the good work, fellows!

I must admit that I struggled with all three lessons. But don't get them wrong... they don't mean that you need to give up on your beliefs... on the contrary... believe in them strongly... they will keep you going...

Commit to your ideas! Keep your mind open and be aware that science is teamwork, more than you think!

In my case, I moved back to the private sector in August, as you can see on my LinkedIn, and I am quite happy with the decision. Btw, we are looking for a Senior Big Data Engineer, so feel free to spread the word. In this company, we know where we start but not where we will end up, because very interesting challenges are coming quite soon. I would love to join efforts and collaborate with a Big Data Engineer.

Now I'm thinking about the best platform to blog on... probably Medium, though I'm also considering LinkedIn... in any case, this is my last blog post here: new topics and new posts are about to come! (somewhere else ;))

I am also sharing the slides of my PhD presentation, in case you want to check them out!

Ps: As I said, this PhD was a team effort. And I must especially thank my promoters, Katrien Verbert and Erik Duval, who is fighting a very particular battle of his own! Good luck!




Wednesday 17 September 2014

The weSPOT meeting is over... now back home!

The weSPOT meeting is over! Nice project, nice people and splendid food in a nice city: Graz :)

Six deliverables are in the oven and close to seeing the light of day ;).

KU Leuven is in charge of D3.3: User management and badges system. And I personally like the content.

The deliverable starts with an explanation of the weSPOT OAuth provider. We had a problem in weSPOT. Our login system relies on OAuth providers... why? Simple... we wanted to simplify the process of users enrolling in our inquiry environment, so users could join our system using their Facebook and Google accounts. However, kids below thirteen shouldn't have such accounts... so we needed to provide another mechanism for them to sign up in our system.

weSPOT created its own OAuth provider in the cloud, hosted on Google App Engine.
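To give a rough idea of what sits at the heart of a provider like that, here is a minimal sketch of an OAuth 2.0-style token exchange: a short-lived, single-use authorization code issued at login is traded for an access token. The class, method names and in-memory storage are illustrative assumptions for this post, not the actual weSPOT code:

```python
import secrets

class TokenEndpoint:
    """Toy sketch of an OAuth provider's token endpoint (illustrative only)."""

    def __init__(self):
        self.codes = {}   # auth_code -> user_id (issued at login time)
        self.tokens = {}  # access_token -> user_id

    def issue_code(self, user_id):
        # Issued after the user authenticates with the provider.
        code = secrets.token_urlsafe(16)
        self.codes[code] = user_id
        return code

    def exchange(self, code):
        # Codes are single-use: pop() removes them on first exchange.
        user_id = self.codes.pop(code, None)
        if user_id is None:
            raise ValueError("invalid or already-used authorization code")
        token = secrets.token_urlsafe(32)
        self.tokens[token] = user_id
        return token
```

A real provider adds client registration, redirect URI checks and token expiry on top of this, but the code-for-token core stays the same.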

If you really want to know more about it... just check it out on our site! And if you want to deploy your own... don't hesitate to contact us!

We have also created several badges to engage users in the use of the system. Here you can see a screenshot.
Besides the interface and the rules, we also created some sort of Open Badges API that provides the basic functionality to create, award and store badges. If you find it useful, you can check out the API. The source code is available, just in case you want to deploy your own instance ;).
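As a rough picture of what "create, award and store" could look like behind such an API, here is an in-memory sketch. The class and method names are hypothetical, chosen for illustration; they are not the actual weSPOT API:

```python
class BadgeService:
    """Illustrative in-memory badge store: create, award, list earned badges."""

    def __init__(self):
        self.badges = {}  # badge_id -> metadata
        self.awards = {}  # user_id -> set of badge_ids

    def create(self, badge_id, name, criteria):
        # Register a badge definition with its awarding criteria text.
        self.badges[badge_id] = {"name": name, "criteria": criteria}

    def award(self, user_id, badge_id):
        # Award an existing badge to a user; unknown badges are rejected.
        if badge_id not in self.badges:
            raise KeyError("unknown badge: " + badge_id)
        self.awards.setdefault(user_id, set()).add(badge_id)

    def earned(self, user_id):
        # All badges a user has earned, in a stable order.
        return sorted(self.awards.get(user_id, set()))
```

A real Open Badges backend would additionally bake the award into a signed assertion so it can be pushed to the Mozilla backpack, but the storage layer is essentially this.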

Almost forgot! We have also implemented and will offer recommendation services. But those will have to wait... they are not in the production version yet... be patient! :)

Sunday 17 August 2014

weSPOT attends #OBIE2014 and #icwl2014

We did it again! weSPOT attended two great events: ICWL and OBIE (the 1st International Workshop on Open Badges in Education).

It was a good opportunity to show all our designs, thoughts and experiences with Open Badges.

We had discussions about questions such as:

How many badges should we design for a course?

What are the first actions that someone should take when s/he decides to deploy badges in her/his course?

Shall badges be a representation of competence/skills?

Moreover, Nate Otto presented a very interesting project that is worth a look. It discusses design principles for badges.

I would also like to highlight one of the papers presented in the main conference: Open Badges: Challenges and Opportunities. The authors reflect on the current possibilities, limitations and future work in the field. They were also among the organizers of the workshop.

We still have many open questions, since the concept of badges and their acceptance remains a challenge.

We also discussed the perception students have when we introduce badges. I explained our experience: some students were not very positive towards badges since, from their point of view, learning is a very serious activity. We addressed the problem informally, explaining to them that we look at badges as goal representations.

One of the main conclusions was that badge acceptance requires dialogue with all the different stakeholders.

Badges, as we discuss in our papers, have different aspects to consider. They are game elements, but they also represent goals and carry social recognition. We are interested in the last two aspects. However, we need to deal with the possible perception of the first.

Many experiments to deploy and fun experiences to enjoy... and soon also in weSPOT :).

Sunday 30 March 2014

weSPOT attends LAK and OLA meeting - Is learning Analytics just a big hype?

Finally, we have reached the end of an exhausting but amazing week! Many meetings, talks and the annoying jet lag!

LAK and OLA have been an amazing opportunity to attend many presentations and have exhilarating conversations with many people... so let's try to summarize the experience a bit.

Personally, I had the opportunity to meet some folks from OUNL (Maren and Hendrick) who are working on Learning Analytics. They are involved in the LACE project. One of the goals of LACE is to build a framework of quality indicators for learning analytics. They are in the brainstorming phase, trying to collect those indicators, and weSPOT has contributed.

Hendrick is also in the process of collecting real data from their institutional LMS, which gives them the opportunity to run different tests on an amazing dataset.

We also had the opportunity to talk with folks from the Apereo foundation, in particular Alan Berg and Sandeep Jayaprakash. They are contributing to the OAAI initiative and trying to define the flow of information in a Learning Analytics system.

Both collaborations can be an opportunity to contribute KU Leuven's expertise: learning dashboards. Sven will soon have the opportunity to do more of his cool stuff with more and different data.

Over the weekend, we participated in the Open Learning Analytics meeting, where we tried to define the roadmap for SOLAR and explored how we can collaborate with each other, funding possibilities, etc.

But... what about the conference itself?

First, I will share the proceedings and my slides (I have to admit, though, that the last slide was the one that got the most attention :-P). It is also worth mentioning the amazing work that some of the attendees did reporting on their own blogs; I think Doug Clow and Stian Håklev were clearly the best.
 
(Advertising spot: Btw, Stian is about to finish his PhD, as am I (I hope ;)), and we are exploring possibilities for "the next step"... yep... the step that every PhD student is scared of... what to do next? Anyway... don't hesitate to contact us if you are looking for some collaboration! ;))

  But again... what is my opinion about the conference?

  Many cool things and impressive analyses of the data... but I have to mention two downsides:
  • There were no dashboard presentations this year.
  • I have the feeling that many of the learning analytics folks forget about the HCI aspects of learning analytics.
  Several times I heard people say: "We are trying to solve a problem that maybe does not exist, but by addressing this nonexistent problem, we generate another problem that maybe we can address".

  But... what is going on? Why do people have this question in mind?

  They do cool stuff... but the adoption of their artifacts is slow or nonexistent. Therefore... they come up with an apparently logical conclusion: I am doing something cool and useful, but if people don't use it... it must be because they don't have the need.

  At one of the dinners, Abelardo Pardo was describing one of his workshop experiences with academics in Australia and New Zealand... and one of the teachers asked, right at the beginning of the workshop: what is wrong with the traditional way of lecturing?

  But we know that alternative ways of teaching influence certain aspects such as motivation, attention and novelty, and they may positively influence the individual and social learning process.

  Is there a need? Ok... probably, if we go by Aristotle's definition of 'necessity'... there is no need... but we can get better at what we are doing, and that assumption is what should trigger the use of alternative methods... the goal of becoming better... although I have to admit that probably not everybody pursues this goal.

  However, HCI does quite a lot of research on technology adoption: what the common problems are in attracting user attention, how we can capture it, how we can engage users in the process... and so on... and this kind of study is something I miss at LAK...

  We'll see if all these talks, events and experiences end up in some fruitful collaboration, which is what really matters... in the end, the knowledge acquisition is already done... so let's try to transfer some knowledge now...

Ah! And don't forget to watch the TED Talk about our view on Open Learning Analytics!


Thursday 23 May 2013

Reveal-it applied in an educational context!

As I mentioned in my previous post, we (Erik, Gonzalo and myself) attended the CHI conference. More concretely, we attended a session on "Tensions in Social Media" because of a very interesting paper called "Reveal-it!: The Impact of a Social Visualization Projection on Public Awareness and Discourse". The fact that the authors were from my former university (UPF), and that one of the co-authors was my assessor Andrew Vande Moere, increased my interest in knowing more about it.

What was Reveal-it about?


I would strongly recommend you read it. I apologize in advance to the authors if I forget some important details, but in summary, the paper describes a set of experiments where energy expenses are visualized on an ambient display in different public spaces. The visualization is designed to increase user awareness of energy consumption. The experiment also digs into how these public visualizations can trigger social discourse. Users who pass by the visualization can report their expenses, and their information is added to the visualization. This information is highlighted in order to attract the user's attention and trigger reflection. A user's information can be compared with the average expenses of her/his neighborhood.

What did we do?

We wanted to test this application in our own context: education. So we started to think about how we could apply a similar concept to our students. It could also be an alternative to our big table overview.

We wanted to test the application with our students who currently use StepUp! (a bit of explanation  here), Navi (a bit more here) and the activity stream (that aggregates all the activity of the course). So this evaluation could help us to understand how Reveal-it could complement our current work.

Ok, so... what do we have? We have students, and we track different activities: tweets, blogs, time, badges... They can work in groups or individually. We wanted to test it with our current students, but the courses are ending... so they would not be reporting new activity to the system... still, we wanted some interaction with the visualization through a second device, such as highlighting the user's activity...

Our first approach was to do the analogy between neighborhoods and groups. But we found two main issues:
  • We had to create one visualization per activity. For instance, for our #chikul13 students, we created four different visualizations (for blog posts, blog comments, tweets and badges earned along the course). Reveal-it can aggregate gas and electricity expenses because both are paid in the same currency, but the nature of our data is completely different, and finding a common way to measure it was difficult.
  • Most of our #thesis12 students do not work in groups, so the analogy between neighborhoods and groups did not work for this case study.
 We thought keeping all the data in one single visualization made more sense. So we dropped the analogy and went for a simpler approach: neighborhoods are different activities, such as tweeting, blogging, commenting, spending time and earning badges, and each user is represented in every "neighborhood". This way, students can see at a glance how their efforts are distributed compared with the others and with the mean.
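The "activities as neighborhoods" mapping can be sketched in a few lines. The data shapes here are hypothetical (a flat list of user/activity events), not our actual code; the point is that every user gets a zero-filled slot in every activity, so the visualization can show how effort is distributed:

```python
def to_neighborhoods(events):
    """events: iterable of (user, activity) pairs.
    Returns ({activity: {user: count}}, {activity: mean}), with every
    user present in every activity (zero-filled) so all users appear
    in every "neighborhood" of the visualization."""
    events = list(events)
    users = {u for u, _ in events}
    activities = {a for _, a in events}
    counts = {a: {u: 0 for u in users} for a in activities}
    for user, activity in events:
        counts[activity][user] += 1
    # Per-activity mean, so each student can compare against it.
    means = {a: sum(c.values()) / len(users) for a, c in counts.items()}
    return counts, means
```

With this shape, rendering one bar per user inside each activity "neighborhood" (plus a mean marker) is a straightforward pass over the returned dictionaries.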

What was the result?


So after Sam and I tweaked the code, this was the result for our #thesis12 and #chikul13 students:



And the students could highlight their usernames using a simple mobile web app:


How did we evaluate it?

We evaluated the tool during a poster demo session of our #thesis12 students, to which our #chikul13 students were also invited. In fact, we ran two evaluations simultaneously: Sven, who focuses on how we can enhance collaborative reflection with multi-touch and large devices, evaluated his tabletop app.

We projected the two visualizations, each one on a different wall.

What were the first reactions from the students? Before the start of the poster session, a couple of #thesis12 students asked me if it was a #chikul13 project or something like that. I told them that the visualization was displaying their data; one stayed for the evaluation, the other almost ran away...

After this, I stayed a bit away from the visualization, and I didn't see any student pay much attention to it. Consequently, they didn't even read the text explaining that they could interact with the visualization. So, before losing the opportunity, I started asking students if they would evaluate the visualization.

I introduced the tool to them, explaining that it relies on the concept of public and ambient displays, but enables interaction through a mobile web app. I let them use the app on my own phone, and they started highlighting their own username and afterwards others' usernames.

Here are some findings from the interviews (10 in total):
  • Two groups of two people understood the visualization faster, and they had some fun (at least they laughed) while comparing with each other.
  • All the individuals needed a bit of help to fully understand the visualization.
  • Three people understood the bars as a chronological representation of their own activity: each bar was a week instead of a user. This may be a bias, because so far StepUp! and Navi use weeks as the granularity level to represent the data.
  • But the most important perception, at least from my point of view, is that most of them expected to interact with the visualization, for instance to highlight outlier users. However, when I asked whether they thought that was a required feature, they replied that maybe comparing themselves with the mean was already enough.
  • I also asked them which they would prefer: the big table visualization or Reveal-it. Opinion was quite divided, because the big table overview gives them more information; however, everybody agreed that Reveal-it was a fancier and nicer visualization that lets you get a quick view of your activity status compared with the others.
We still have to research this a bit more. But Reveal-it is considered more or less as useful as StepUp! and Navi. On the other hand, the #thesis12 students would feel less comfortable than the #chikul13 students with such visualizations used in public spaces. And they do not find that interacting with the visualization through a second device (in this case a mobile web app) makes a lot of sense.


Monday 29 April 2013

[weSPOT] Personal informatics, workshop, chi conference and weSPOT

This weekend weSPOT (myself, on behalf of the project) attended the personal informatics workshop at the CHI conference. It was nice how the organizers set up this workshop in a hackathon kind of way.

First we participated in the workshop madness session, a series of 2-minute presentations where participants could introduce themselves and their work. 2 minutes is a really short period of time, but enough to make others understand what you are working on and what you expect from the workshop. Sure, it requires pragmatism, simplicity and leaving aside a bit of the narcissism that characterizes every good (and not so good) researcher ;).

In fact, it was one of the issues that Mara Balestrini brought to the discussion: are personal informatics promoting narcissism? Personal informatics is pretty much about self-knowledge, but these tools should also promote empathy among users... it is not only a matter of understanding yourself, but also of understanding others. I really liked this reasoning because, in our topic, learning, this is also important. In fact, we expect students to understand themselves through understanding their peers in the social context.

After the workshop madness session, we started our hackathon. We worked on a project that we had previously discussed over email. The members of my team were Mara Balestrini, Jon Bird, Christian Detweiler, and Mads Mærsk Frost. Basically, our team focused on how truthful answers are when people reply to a survey, given social desirability bias. This topic has been discussed for years, and some people have already proposed a simple solution for yes/no questions [1][2].

Jon Bird proposed developing an app based on this system. The system relies on a very simple methodology: before answering a question, the user has to flip a coin. If it's tails, you have to tell the truth; if it's heads, you have to reply 'yes' by default. This way, nobody knows whether you replied truthfully or not. However, statistically we know how many 'yes' answers we can drop from the sample, and the rest are reliable. The theory says that, this way, we can recover the real percentages of the answers.
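The coin-flip protocol and the statistical correction fit in a few lines of code. This is a small sketch of the general randomized-response idea, not our app's implementation: with a fair coin, the observed 'yes' rate is 0.5 + 0.5·p, so the true rate is p = 2·observed − 1.

```python
import random

def randomized_answer(truth: bool) -> bool:
    """One respondent's answer under the coin-flip protocol:
    heads forces 'yes', tails reveals the truthful answer."""
    if random.random() < 0.5:  # heads
        return True
    return truth               # tails: answer truthfully

def estimate_yes_rate(answers) -> float:
    """Recover the underlying 'yes' proportion p from observed answers.
    Observed rate = 0.5 + 0.5 * p, so p = 2 * observed - 1 (clamped)."""
    observed = sum(answers) / len(answers)
    return max(0.0, min(1.0, 2 * observed - 1))
```

For example, simulating 100,000 respondents of whom 30% would truthfully answer 'yes' should give an estimate close to 0.3, even though no individual answer can be trusted.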

In order to demonstrate whether this system could be integrated into an app, we are going to deploy three different kinds of surveys: one where the coin-flipping methodology is not applied; another where the user has to flip a physical coin; and a third where the user has to flip a virtual coin integrated into the system.

The ideal result would be that social bias exists in the first one but not in the other two. We'll see... we hope to deploy tomorrow during the conference.

Is anyone wondering what kind of questions there will be? We'll try to balance very personal ones, such as "have you ever had an affair?", with less personal ones where the social bias should be smaller.

We'll see what comes up from this very interesting workshop! Hope we can report something soon!

In the meantime, let's see if we can get some inspiration from this amazing conference!




Wednesday 6 March 2013

Navi, StepUp, OpenBadges and ¿Gamification?

It's been a long time since my last post... but it is always good to get back to good habits...

Yesterday there was a really nice discussion in our HCI course where we are evaluating our Open Badges approach.

In this experiment several tools take part:
  • Navi: It's the dashboard that displays the badges to our students. As you probably know, we are continuously iterating our prototypes, and this is not an exception ;) So feedback is welcome! Btw, this app is developed by Sven Charleer, who joined our team in January.
  • StepUp back-end: If you have read previous posts on this blog, you know that I am working on trackers and on visualizing this information in a meaningful way for students (or at least I try to)
  • Open Badges system: We rely on Mozilla Open Badges System to give the students the possibility to share their badges with the outside world through social networks.
  • Analytics layer (sorry, it does not have any URL): The backend that contains all the rules to award the badges.
  • Activity Stream of the course: Following the same concept as TinyARM, which aims to increase awareness of what others are reading, we merge the different activity streams of the course (Twitter, blogs and badges) into one activity stream, offering filters for the different actions.
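To make the analytics-layer bullet above a bit more concrete: a backend full of badge-awarding rules can be pictured as a set of predicates over a student's activity counts. Everything here (rule names, thresholds, data shape) is an illustrative assumption, not our actual implementation:

```python
# Hypothetical badge rules: each maps a badge name to a predicate
# over a student's activity counts.
BADGE_RULES = {
    "first_tweet":    lambda a: a.get("tweets", 0) >= 1,
    "active_blogger": lambda a: a.get("blog_posts", 0) >= 5,
    "commenter":      lambda a: a.get("comments", 0) >= 10,
}

def award_badges(activity, already_awarded=()):
    """Return the badges a student newly qualifies for, given their
    activity counts and the badges they already hold."""
    return sorted(
        name for name, rule in BADGE_RULES.items()
        if rule(activity) and name not in already_awarded
    )
```

Keeping the rules declarative like this makes it easy to iterate on thresholds between course runs without touching the awarding logic.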
What is the goal of this experiment? Are we gamifying the course?

Badges are game elements, but they are representations of achievements. Some students claimed yesterday that applying gamification to master students was a bit childish... and I am aligned with this idea... there is even current research that argues against the gamification of learning because it breaks the real motivation for learning... everybody should have their own intrinsic and extrinsic motivation for learning... However, how the teacher teaches the lesson is another point... whether he does it dynamically, participatively, collaboratively or simply boringly is up to him or to some rules of the institution... and it is usually up to the student to attend the f2f lessons (unless they are mandatory), to be participative, etc... and learning analytics tools can be part of these decisions.

Learning analytics is another resource to help students steer their own learning process, but it is up to the student to use the tools we provide. We usually test our applications with bachelor and master students, and our assumption is that they are autonomous learners... they will become engineers and computer scientists soon... so our first assumption couldn't go in any other direction.

So... What are badges for us?

Our assumption is that badges are a representation of achievements and a means to reflect on what is going on in the class.

If you (as a student) are not tweeting, commenting or blogging but you see that others are getting badges for it, it may trigger a question:
  • why is the teacher giving badges to the students? The answer is clear: we are encouraging positive behavior.
Some badges are considered neutral, but they are awarded periodically. Theoretically, they aim to increase awareness of what you or another student has done. We could use a chart instead, but whereas badges represent an achievement directly, visualizations leave the cognitive effort of drawing conclusions to the user, and we try to simplify this reflection process.

If someone finds a fun element in this process, great! It will increase motivation, and that usually has positive effects! But... learning is already fun by itself!

And what are we trying to figure out from our students? Do they consider badges useful? As a means for reflection, as motivational elements, as positive feedback... they decide, and:

WE LEARN FROM THEM