Now that the first round of session selection is wrapped, let’s talk about what worked well and what didn’t. I’ll go first because, uh, I’m the author.
What I think worked well:
Rapid feedback for speakers – there were over 200 comments left on 45 abstracts for an average of ~4 comments each. Some of ’em generated a ton of feedback, and presenters were able to refine their abstracts early on.
Attendees participated in the process – for this first event, we let both anonymous and logged-in users leave ratings. I was originally worried that we wouldn’t have enough logged-in user turnout, and that we’d need anonymous ratings to get a significant sample. That wasn’t the case – we got 2,788 ratings (an average of 62 per session, median of 50), most of which were from logged-in users. (That doesn’t mean the ratings process is the right way to go, though.)
The WordPress infrastructure worked – for a version 1.0 event, man, this thing worked pretty well. There are a lot of small workflow improvements I wanna make over time.
What I think needs to be tweaked:
Let’s split session submission into 2 phases – this brilliant idea came from Adam Machanic in the forums:
Phase 1: Session Proposal Review – speakers can submit abstracts anytime they want (even right now, for future GroupBy events). The goal in this phase is to help speakers refine their abstracts, and session editing is encouraged. Ratings on Topic & Abstract will continue, but only as Good/OK/Needs Improvement, and a comment will be required to leave a review. Nothing will be done with these ratings – they won’t be shown on a leaderboard, won’t be shown during the next phase, and they’re just to help the speaker improve their craft.
On a certain date, we lock all currently submitted sessions and start…
Phase 2: Attendee Rating – no new sessions or edits for 2 weeks. Logged-in users can leave ratings (no anonymous ones per the forum discussion), and your name won’t be shown publicly with your rating. The top X sessions get into the event, as with this past round.
Now I need your help on figuring out a rating mechanism:
- Option A, Star Ratings (Quantity × Score): what we used this time, but with only one rating category (likelihood of attending). If we pick this, I know a lot of speakers want comments to be required when voting happens, but since the name & comment would be shown publicly, I’m not sure that’s a good idea. (Although it’d have an interesting side effect – if you didn’t like a session, you just wouldn’t vote for it.)
- Option B, Pre-Registration: forget ratings, and let users just register for sessions. This way you can register for any sessions you’d want to attend. I don’t think this is gonna work – say we get two Power BI sessions on identical topics. Users will just register for both, and we won’t have a good way of knowing which one they want more.
- Option C, You Only Get 10 Votes: you see a list of all the abstracts, and you only get to pick 10 that you want to attend. The sessions with the most votes win. (This is my favorite right now.)
I’m open to other ideas too. (Not average rating score alone though – someone with just a handful of 5-star votes could win, and that wouldn’t build a conference with widespread appeal.)
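To make the scoring math concrete, here’s a rough Python sketch with made-up session names and ratings (none of these numbers are real). It shows why average score alone is risky, how Option A’s quantity-times-score rewards broad appeal, and how Option C boils down to simple vote counting:

```python
from collections import Counter

# Hypothetical star ratings per session (Option A style).
ratings = {
    "Query Tuning 101": [5, 5],                            # only 2 raters, both loved it
    "Intro to Power BI": [4, 4, 5, 3, 4, 5, 4, 4, 3, 5],  # 10 raters, broad appeal
}

def average_score(stars):
    # Average alone: the 2-rater session wins, which is the problem noted above.
    return sum(stars) / len(stars)

def qty_times_score(stars):
    # Option A: quantity * average score == sum of stars, so turnout matters.
    return sum(stars)

print(average_score(ratings["Query Tuning 101"]))    # 5.0
print(average_score(ratings["Intro to Power BI"]))   # 4.1
print(qty_times_score(ratings["Query Tuning 101"]))  # 10
print(qty_times_score(ratings["Intro to Power BI"])) # 41

# Option C: each user picks up to 10 sessions; the most-picked sessions win.
picks = [
    ["Query Tuning 101", "Intro to Power BI"],
    ["Intro to Power BI"],
    ["Intro to Power BI", "Query Tuning 101"],
]
votes = Counter(session for user in picks for session in user)
print(votes.most_common(1))  # [('Intro to Power BI', 3)]
```

Under average score, the niche session wins; under Option A or Option C, the broadly appealing one does.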
Next thing to improve – registration logistics – this isn’t an issue yet (Dec 19th), but it’s going to go fast. GoToWebinar maxes out at 1,000 simultaneous attendees on a live broadcast, and I know we’re going to hit that on some of these sessions. I want to make sure that if you voted for a session, you’re going to be on the guest list for sure. That means in the rating phase, Option B and Option C are much more appealing to me than pure 5-star scores.
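One reason Options B and C appeal here: the votes double as a seating list. A minimal sketch of that idea, with hypothetical names and a made-up `guest_list` helper (the 1,000 cap is GoToWebinar’s documented limit; everything else is illustrative):

```python
# Assumed GoToWebinar cap on simultaneous live attendees.
GOTOWEBINAR_CAP = 1000

# Hypothetical: who voted for (i.e., picked) each session during the rating phase.
votes_by_session = {
    "Intro to Power BI": ["a@example.com", "b@example.com", "c@example.com"],
}

def guest_list(session, later_registrants):
    # Voters are guaranteed a seat; leftover seats go to people who
    # register later, in order, until the webinar cap is reached.
    guaranteed = votes_by_session.get(session, [])
    open_seats = max(GOTOWEBINAR_CAP - len(guaranteed), 0)
    return guaranteed + later_registrants[:open_seats]
```

So with pure star ratings there’s no per-person record of who wants in, but with B or C the guest list falls out of the data for free.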
Whaddya think? (I’m onsite with a client this week so I won’t be chiming in until after business hours, but folks can have a vibrant discussion here in the comments.)