
    Microsoft announces that the Xbox is coming to the PC.

    From the Guardian’s piece, “Microsoft to unify PC and Xbox One platforms, ending fixed console hardware”:

     During a press event in San Francisco last week, Spencer said that the Universal Windows Platform, a common development platform that allows apps to run across PC, Xbox, tablets and smartphones, would be central to the company’s gaming strategy. “That is our focus going forward,” he told reporters. “Building out a complete gaming ecosystem for Universal Windows Applications.”

    This is, he explained, the culmination of the company’s vision over the past year. In January 2015, Microsoft announced that it was bringing an Xbox app to Windows 10 PCs, allowing cross-platform play and a cohesive friends list across both platforms. Then, in November, the Xbox One was updated to be compatible with Windows 10, bringing a new interface and features to the console. In late-January, Microsoft chief executive Satya Nadella told attendees at the dotNet conference in Madrid that UWAs would be coming to Xbox One, but did not specify in what capacity.

    I actually predicted that Microsoft would do this eventually in “Steambox vs. the Incumbents (Xbox One, PS4)”:

    Here are my bets:

    • Valve’s strategy will play out over the next three years.
    • Microsoft will make their Xbox One experience available as a digital download–you’ll be able to run your Xbox games on your Windows 8 PC.
    • PC OEMs will manufacture generic gaming consoles certified for Windows 8.x with the Xbox One Experience and SteamOS, but Valve will have the upper hand because they will support game streaming to Android and iOS mobile devices.
    • Game streaming from a single high-end PC to lighter / thinner clients will be the norm by 2015.

    Valve’s Steambox strategy never really played out the way I expected (although they still have six months to make my prediction come true).  It was disrupted by the emergence of VR over the last 12-18 months.

    In general, the OEM Steamboxes are disappointing.  It makes sense for Microsoft to unbundle the Xbox One experience: consoles just don’t have the penetration that PCs do, and growth has stopped in both industries.  Microsoft can buoy their media business by extending their reach into PCs.  As for generic hardware that can run both Windows and SteamOS, this is exactly what the OEMs are doing (e.g., Alienware’s Alpha is the same box for Windows as it is for SteamOS).  It’s unfortunate that SteamOS hasn’t caught on–I would blame the poor performance of Linux GPU drivers.  There is little advantage to buying a SteamOS box–the Windows experience with Steam Big Picture is just better and more versatile.

    Game streaming is still an emerging behaviour, and the forays into it by Valve (via the Steam Link box) and Nvidia haven’t set the world on fire.  Sony, to their credit, is definitely making headway here.  One thing holding streaming back is that it almost always requires a wired infrastructure; wired Ethernet is less common nowadays, and most homes are linked through a single WiFi router.  Moreover, single-purpose devices (like the Steam Link) don’t appeal to the general populace.  They need more mass-consumer features (e.g., Netflix, YouTube, etc.).

    Will the Internet of Things be the next green field?

    I’ve been looking at the MEAN.io stack (it seems like the new hotness) and I can see it being the underlying stack for the “Internet of Things”.  IoT is a term that has slowly crept into the consumer marketplace, displacing the “home automation” trend, but it’s been in use in many other industries, like supply chain and manufacturing, for years.  It popped up in the consumer mindshare in a big way at this year’s CES and MWC.
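    To make that concrete, here’s a minimal sketch (my own toy example, not anything shipping) of what an IoT telemetry endpoint on the MEAN stack might look like–Express on Node for the API, with MongoDB standing behind it.  The routes, device IDs, and in-memory store are all hypothetical placeholders:

    ```typescript
    // Toy IoT telemetry endpoint on the "E" and "N" of the MEAN stack
    // (Express on Node). The in-memory array stands in for MongoDB.
    import express from "express";

    interface SensorReading {
      deviceId: string;   // e.g. "thermostat-01" (made up)
      metric: string;     // e.g. "temperature"
      value: number;
      timestamp: number;  // Unix epoch, ms
    }

    const app = express();
    app.use(express.json());

    const readings: SensorReading[] = [];

    // Devices POST their readings here.
    app.post("/api/readings", (req, res) => {
      const reading: SensorReading = { ...req.body, timestamp: Date.now() };
      readings.push(reading);
      res.status(201).json(reading);
    });

    // A dashboard (the Angular "A") would poll or subscribe to this.
    app.get("/api/readings/:deviceId", (req, res) => {
      res.json(readings.filter((r) => r.deviceId === req.params.deviceId));
    });

    app.listen(3000, () => console.log("listening on :3000"));
    ```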

    As we’ve seen with the mobile space being dominated by the iOS and Android ecosystems, innovation takes place on those platforms, on their terms.  IoT is proving to be the next green field.1

    If you read any of Ben Thompson’s work at Stratechery.com, you’ll know he wrote a wonderful article in 2014 called “The State of Consumer Technology at the End of 2014”, where he outlines what he calls the “Three Epochs of Consumer Tech”:

    [Image from Ben Thompson: “The Three Epochs of Consumer Tech”]

    Extending the model below, it’s clear why companies like Apple, Amazon, and Google, and challengers like LG, Samsung, Xiaomi, and Huawei, will be leading the charge into the IoT space over the next one to three years.

    Epoch           PC               Internet          Mobile           IoT
    Communications  Email            Facebook          Messaging        Voice?
    Work            Office           Google            “Sharing”        Uber, Amazon
    OS              Windows          Web Browser       Android / iOS    AWS?
    Scale           10s of Millions  100s of Millions  Billions         10s of Billions

    Extending that illustration, you can map the impact of connected household goods to tens of billions of connected devices; it’s an exciting opportunity.

    As with any new technology, there will always be unintended consequences.  As you increase the number of connected devices by an order of magnitude, you inevitably introduce a whole new set of variables to the equation.

    Take the upgrade cycle, for instance.  The upgrade cycle of home appliances is an order of magnitude longer than that of any phone or tablet.2  It’s not uncommon to keep an appliance for 10+ years.  Imagine how the industry will change during that time: whole services and protocols will go in and out of fashion.  Will appliance manufacturers keep the software up to date?  Surely closed ecosystems will age out and introduce planned obsolescence–look to more open development platforms to keep devices up and running in the coming years.3

    A lot of interesting questions

    Who will provide the fabric of the IoT?

    • Clearly the underlying protocol will be IPv6-based, but I don’t know of any emerging higher-level protocol that will take hold.
    • Cisco Systems is positioning itself for this with its Jasper and OpenDNS acquisitions.

    Where is the hub for IoT?

    • Is the living room the centre of the connected home?  Both Microsoft and Apple seem to be placing their bets there (with the Xbox and Apple TV, respectively).  Google’s attempts in the TV space are half-hearted, and their acquisitions of Nest and Dropcam only address the outer edge of a hub-and-spoke model.
    • Or maybe there is no hub, and everything centres around the smartphone?

    Who will provide the OS for IoT?

    • I think the emerging leader right now is Amazon, with their Amazon Echo device.  I can see it being the UI for the OS; in general, it’s voice.  Facebook has planted the seed that the group messaging client will be the new operating system, but their idea of mining intent from messaging using bots and NLP doesn’t pass the smell test for me (people generally don’t want Facebook looking at their chats…).
    • From a platform basis, it’s not so clear.  Everyone will have devices running their own software, and strong brands will attempt to create walled gardens (Google vs. Apple vs. Samsung).  In that sense, IoT will be a horrible customer experience without some defined standards.  In my opinion, it will take something akin to the early W3C specifications, with champions on both the private and standards sides, to get to something usable.

    Who are the big players in the IoT space?

    • Will it be Apple? Google?
    • I think it will come out of the hacker space in China.  Only they have the supply chain expertise to build the necessary sensors into devices.

    What’s the big payoff for consumers?

    • Is it really important to have a toaster and fridge on the Internet via IPv6?  No, it isn’t.  This is a difficult problem for companies: communicating the value is hard because it’s not very clear how all of these inter-connected devices will improve your life.  Examples need to be specific.
    • It’s all about the sensors, and all about the software that will take that information and make it smart, insightful, and enlightening–we’ve barely scratched the surface of this for consumer consumption, but we’ve seen it play out for years in heavy and light industry (e.g., supply chain).

    1. To a lesser extent, you could say this about VR as well, but what is markedly different is the scale of the two areas.  We’re talking tens of millions of devices vs. billions, respectively. 
    2. As much as I am enticed to upgrade my aging iPad 3 (from 2012) with the new iPad Pro 9.7″ launched this week, I think I’ll wait one more version.  I think Tim Cook underestimates the upgrade cycle for the iPad.  It’s more akin to a PC, and there is no cellular provider offering upgrade subsidies to catalyze its upgrade cycle like with the iPhone. 
    3. I have the same concern with automotive software–it will be interesting to see how Tesla fares.  I think Apple Car and Android Auto are the future. 

    Virtual Reality Adoption: Market Size, Affordability, Ergonomics, and Shareability

    The big story of CES 2016 was the outstanding virtual reality demos by Oculus, HTC, and Samsung.  A lot has already been written about the technology and how amazing it is, but I wanted to touch on some of the user adoption issues that I think these companies need to overcome in order to make this a mass-audience play.

    There are basically four contenders emerging out of CES 2016:

    • Oculus
    • HTC
    • Sony
    • Samsung

    We all know that Oculus is the front-runner.  Facebook’s purchase of Oculus put them on the map, and they are pushing the VR platform forward with brilliant technology and engineering.  The newcomer is HTC.  They’ve come out of nowhere with their Vive headset, and some are saying their technical demos are more impressive than Oculus’s.

    The key thing for HTC’s Vive is its affiliation with SteamVR.  It gives them reach into the hardcore gamer market and a proven eCommerce platform to sell software through.  Great move by HTC and Valve.  Sony didn’t participate in CES 2016, but they have been showing off their Morpheus headset and are boasting a large gaming catalogue.  Samsung’s focus is on their mobile devices with Gear VR.  More importantly, Samsung’s take on VR is in the market today.

    I’ll admit it: I’m on the VR bandwagon.  I bought a cheap Google Cardboard shortly after Christmas and I’ve put it through its paces.  All I can say is that even at $15 CAD, this stuff is real, it has tonnes of potential, and it won’t be going away any time soon.  There’s real promise in the technology today (unlike the ridiculousness of the mid-90s version of VR, as typified by VR.5).  I want to pre-order the Oculus Rift, but I’d also have to spend a good chunk of change upgrading my PC.  There are also a few things I find questionable about the mass adoption of VR technologies.

    Is the market large enough?

    Sony probably has the most to offer, with their planned VR headset primed and ready for the PlayStation 4.  With 100 games at launch, this will be a significant coup for them.  Given an install base of over 36 million PS4s worldwide, the addressable market for Sony’s VR platform appears to be two to four times greater1 than Oculus’s and HTC’s share of compatible computers.

    As Jason Evangelho from Forbes.com says:

    GPU maker Nvidia estimates that when the Oculus Rift ships later this quarter, there will only be 13 million PCs that are able to run an optimized VR experience.

    Jason Paul, general manager of Nvidia’s Shield, gaming, and VR business, has insight into the hefty demands for gaming in Virtual Reality. Speaking to VentureBeat, he said: “If you look at your typical PC gaming experience, 90 percent of the gamers out there play at 1080p. For a smooth experience you don’t want to go below 30fps. Compare that to VR where the displays are about 2K, but you have to render closer to 3K, and you don’t want to go below 90fps. It’s about a sevenfold increase in raw performance to render for VR versus traditional PC gaming.”

    Meanwhile, we just learned that Sony’s $349 PlayStation 4 continues to sell briskly, with the company approaching 36 million units sold globally. Every PS4 sold is capable of running the PlayStation VR (formerly Project Morpheus) experience. On the most basic level, that means there are 36 million PS4 systems in the wild right now, capable of running an optimized VR experience (“optimized” since there’s only one platform with uniform specs to develop for).
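    Both of those figures are easy to sanity-check.  Here’s the back-of-the-envelope math (the 3024×1680 render target is my assumption of what “closer to 3K” means for the Rift’s 2160×1200 panel):

    ```typescript
    // Nvidia's "sevenfold" claim: pixels per second, typical PC vs. VR.
    // The 3024x1680 render target is an assumption, not an official spec.
    const pcPixelsPerSec = 1920 * 1080 * 30; // 1080p at 30 fps
    const vrPixelsPerSec = 3024 * 1680 * 90; // assumed VR render target at 90 fps
    console.log((vrPixelsPerSec / pcPixelsPerSec).toFixed(1)); // ~7x raw pixel rate

    // Addressable-market ratio: 36M VR-capable PS4s vs ~13M VR-ready PCs.
    console.log((36 / 13).toFixed(1)); // "2.8" -- inside the "two to four times" range
    ```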

    Is a profitable business sustainable with only tens of millions of users?  Definitely a “yes” if the devices can be sold at a high margin, but…

    We all know that consumer tech is typically a race towards the bottom.

    If anyone is positioned well, I have to think it’s Samsung.  With hundreds of millions of handsets already compatible with Gear VR, it gives a really immersive experience with a relatively low start-up cost.  While I don’t think Samsung will lead the race2, they will be remembered as the ones who really got the VR bandwagon rolling.  Hell, they’re providing all the display tech anyway.

    New technology is always expensive

    I know that hardcore PC gamers will front the money for a headset that promises a more immersive experience for their games; I’m not so sure about console gamers.

    The cost is significant.  Only a small percentage of PCs today meet the minimum requirements for consumer VR.  At ~$1500 USD for a headset and VR-ready computer bundle, I’m not sure VR will spur people to upgrade their computers.  More importantly, there really isn’t any portable computing solution that lets people even experience the Rift or Vive.

    Quite frankly, households don’t have that many desktops anymore–I typically wouldn’t recommend a desktop to anyone purchasing a new computer.

    Wires suck

    Having your head tethered to a PC or gaming console feels ridiculous and looks ridiculous.  As impressive as the Vive’s head and motion tracking appear to be, walking around with a cable attached to your head isn’t exactly immersive.  It’s a total health and safety hazard, and an ergonomics issue that will require significant engineering to overcome.

    The experience isn’t shareable

    Having a person holding the cables coming out of the back of your head as you walk around isn’t what I call communal.

    Not being able to easily share the experience with others will hold back adoption as well; VR headsets can’t leverage the network effects of the Internet, and their growth will probably operate more like a SneakerNet.  It can’t go viral very easily.  eReaders have the exact same problem with traditional book lovers: traditionalists don’t see a reason to change over, but if you give them an eReader to use, the likelihood of conversion is much higher.3
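    To see why shareability matters so much for growth, here’s a toy viral-coefficient model (all numbers invented): each user exposes the product to some friends, a fraction of whom convert.  When a product is hard to demo, exposures drop and growth plateaus.

    ```typescript
    // Toy viral-growth model. K = exposures per user * conversion rate;
    // K < 1 means each generation of users recruits fewer than itself.
    function usersAfterGenerations(seed: number, k: number, generations: number): number {
      let total = seed;
      let cohort = seed;
      for (let i = 0; i < generations; i++) {
        cohort *= k;    // each generation recruits k new users per user
        total += cohort;
      }
      return Math.round(total);
    }

    // Easily shared product: 5 exposures/user, 30% conversion -> K = 1.5.
    console.log(usersAfterGenerations(1000, 1.5, 10)); // ~171,000 users
    // Headset tethered in the basement: 1 exposure/user, 30% conversion -> K = 0.3.
    console.log(usersAfterGenerations(1000, 0.3, 10)); // plateaus around ~1,400 users
    ```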

    That said, the display technology is absolutely bleeding edge.  John Carmack discusses some of the issues they are encountering in his 2014 Oculus Connect keynote.  I suspect it will drive a lot of the innovation in the consumer electronics, PC, and mobile spaces for several years to come.


    1. Given that Nvidia provided the number, they are probably excluding systems with AMD-based GPUs, so this is most likely an underestimate on their part. 
    2. Samsung’s reliance on Google for the software will do them in.  Cheap Chinese OEM knock-offs of the Gear VR are already flooding the market. 
    3. I’d like to think that I know what I’m talking about here.  ^_^ 

    The fall of Oyster and Scribd: Subscriptions might become interesting again

    Interesting tweetstorm (or at least that is what I think it’s called) from Fahrenheit Press, a fine purveyor of crime fiction.

    I encourage you to read the rant, as it’s a good indicator of the state of subscription services for eBooks.

    Below is a good summary of what happened.

    No publisher is going to turn down terms like that.  So when a VC-backed entity like Oyster or Scribd says “okay, you win, we’ll starve ourselves,” all the oxygen leaves the room for non-terrible discussions.1  It effectively set back reasonable business terms on subscriptions by two years.

    Now that Scribd is circling the drain and Oyster is effectively gone, things get interesting again for the rest of the players to see if we can move this industry in the right direction, with business models that benefit the entire food chain.

    As for the staff of Oyster, congratulations on being “acqui-hired” by Google.  There are definitely worse fates than that.


    1. By “non-terrible”, I mean business terms where the distributor doesn’t lose their shirt in the transaction. 

    Ad-Blocking: users are revolting

    There has been a lot of great writing across the Web regarding advertising and the ethics of ad-blocking software since the recent release of iOS 9 and its new Safari Content Blocking features.  Unsurprisingly, ad-blocking software quickly rose to the top of the paid-app charts on iOS.
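    For context, a Safari content blocker is little more than a JSON list of trigger/action rules that the extension hands to Safari.  A minimal sketch (the ad-network domain is made up for illustration):

    ```typescript
    // A minimal Safari content-blocker rule list, typed for clarity.
    // "adtracker.example.com" is a hypothetical domain.
    interface BlockerRule {
      trigger: { "url-filter": string; "resource-type"?: string[] };
      action: { type: "block" | "block-cookies" | "css-display-none" };
    }

    const rules: BlockerRule[] = [
      // Block any request to the (hypothetical) ad network.
      { trigger: { "url-filter": "adtracker\\.example\\.com" },
        action: { type: "block" } },
      // Strip cookies from all script loads.
      { trigger: { "url-filter": ".*", "resource-type": ["script"] },
        action: { type: "block-cookies" } },
    ];

    // The extension ships this JSON to Safari, which compiles it natively.
    console.log(JSON.stringify(rules, null, 2));
    ```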

    Content providers quickly responded:

    [Image: CNET blocking visitors who use an ad blocker in mobile Safari]

    Is this what we really want?

    I’ve been running ad-blocking software for years.  It’s one of the first things I set up when I download Chrome or Firefox.  I do this as a public service on other people’s computers, too.

    This has been coming for years.  Advertisers had been lulled into complacency with the advent of mobile browsers.  If mobile Safari or Chrome for Android had had these features built in from the very start, we’d probably have seen the emergence of native ads sooner, and Apple’s own ad platform probably would have taken off.  The fact that it’s taken advertisers this long to realize it is their own fault.

    I know that this comes at the detriment of content creators.1  Working in the digital publishing space at Kobo, I understand that consumers don’t appreciate that digital production and delivery represent only a small part of the production life-cycle for books and journalism.  Just because the consumption channel is digital doesn’t mean that there are massive savings for the content producer.2

    Ben Thompson notes:

    This didn’t happen by accident; to BuzzFeed founder and CEO Jonah Peretti’s credit, BuzzFeed was built from day one to be a business that earned money the old-fashioned way: by being better at what they do than any of their competitors.

    Publications that seek to imitate their success — and their growth — need to do so not simply by making listicles or by focusing on social. Fundamentally, like BuzzFeed, they need to start with their business model: the future of journalism depends on embracing what far too many journalists are proud to ignore.

    And Seth Godin absolutely nails the current consumer ambivalence toward how hostile ads have become3:

    And advertisers have had fifteen years to show self restraint. They’ve had the chance to not secretly track people, set cookies for their own benefit, insert popunders and popovers and poparounds, and mostly, deliver us ads we actually want to see.

    Alas, it was probably too much to ask. And so, in the face of a relentless race to the bottom, users are taking control, using a sledgehammer to block them all. It’s not easy to develop a white list, not easy to create an ad blocker that is smart enough to merely block the selfish and annoying ads. And so, just as the default for some advertisers is, “if it’s not against the law and it’s cheap, do it,” the new generation of ad blockers is starting from the place of, “delete all.”

    This problem will only get worse for content publishers that rely on today’s advertising platforms to generate revenue.  Keep in mind that the companies advertising their wares aren’t going to hurt; they only pay for ads that are seen, not ads that are blocked.  It’s the companies with large, active user bases who are willing to monetize them that are going to win out.  The Facebooks, YouTubes, and Snapchats are going to see a pretty big increase in advertising spend–from a limited pool, no less.4  The kicker is that they don’t even produce any of the content themselves.5

    On the content publishing side, we’ll see a lot more consolidation as publishers of all sorts begin to realize that their captive audience is too small to fund their production line.  Book publishers will also need to re-evaluate the entire creative life-cycle: everything from how agents work with up-and-coming authors to how content is produced, marketed, and sold to consumers.  That’s a scary proposition for an industry that has remained remarkably unchanged over the last century, but for those leading the charge, it’s quite exciting.  As for me, I’ve been involved in digital publishing for less than a decade and I’m still amazed by it.


    1. This doesn’t keep me up at night. 
    2. I’ll be the first to admit that there is probably some fleecing going on at the Big 5.  They over-value their back-catalogue in an age where everything is about the “now”.  There is also some disruption occurring in this space that indicates what the market is willing to bear (for better or for worse) with regards to the pricing of eBooks–most notably in digital-only, self-publishing platforms like Wattpad, Kobo Writing Life, and KDP. 
    3. If there is one problem that I am trying to solve at Kobo it is this quote from Godin: “Commodity products can’t expect to easily build a profitable ‘brand’ with nothing but repetitive jingles and noise.” 
    4. Who knew that the total US spend on advertising has held steady at ~1.29% of GDP? Source.  All retailers have done is shift from one channel to another. 
    5. In the next two years, Facebook will begin producing its own content.  They will probably acquire BuzzFeed or something. 

    Product management fundamentals: The next feature fallacy

    Joshua Porter’s tweet about the “next feature fallacy” (see Andrew Chen’s response below) got me thinking.

    When your product is growing and ramping up new customers, it’s easy to focus on compelling new features that increase engagement.  It’s also easy to ignore dissatisfaction among your growing base of existing customers, because your growth rate exceeds your churn.

    Things start to fall apart, though, when your growth slows down.  It’s tempting to focus on exciting new features that you think will turn back the tide, and you fall into the fallacy mentioned above.  It makes sense in hindsight: you and your team are used to the pace and cadence that comes with new feature development.  The problem is that the reach of each new feature becomes smaller over time.  Features that assume a specific level of engagement will, more often than not, fall flat, because discovery of the feature will never be 100%.  If you’re lucky, it will reduce churn.  It will not increase growth.  The toy model below shows why.
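    Here’s a toy funnel (all numbers invented) that makes the reach problem concrete: the deeper into the lifecycle a feature sits, the smaller the audience it can possibly touch.

    ```typescript
    // Toy user lifecycle funnel with invented numbers. A feature aimed at
    // a given stage can reach, at most, the users who get that far.
    const funnel = [
      { stage: "visit landing page", users: 1_000_000 },
      { stage: "sign up",            users: 100_000 },
      { stage: "weekly active",      users: 20_000 },
      { stage: "power user",         users: 2_000 },
    ];

    for (const { stage, users } of funnel) {
      const maxReach = (users / funnel[0].users) * 100;
      console.log(`${stage}: max reach ${maxReach}% of top-of-funnel`);
    }
    // A power-user feature touches 0.2% of visitors at best, even with
    // perfect discovery -- it cannot move top-line growth much.
    ```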

    Lifting up the covers and opening up the closet often reveals things like dust bunnies and skeletons.  No one, and I mean no one, likes to work with that stuff, but it’s a necessary part of building a great product or service.

    Andrew Chen writes a great response to the tweet, entitled “The Next Feature Fallacy: The fallacy that the next new feature will suddenly make people use your product”.  I especially like this quote:

    How to pick the next feature
    Picking the features that bend the curve requires a strong understanding of your user lifecycle.

    First and foremost is maximizing the reach of your feature, so it impacts the most people. It’s a good rule of thumb that the best features often focus mostly on non-users and casual users, with the reason that there’s simply many more of them. A small increase in the front of the tragic curve can ripple down benefits to the rest of it. This means the landing page, onboarding sequence, and the initial out-of-box product experience are critical, and usually don’t get enough attention.

    It’s a great read.

    Concentrate on the things that matter.  Fix the stuff affecting the majority of your customers today.  Get your analytics up and running so that you understand your customer life cycle.  Most importantly, make sure that everything you do continues to drive towards the vision you have for your product (and it’s okay to change and pivot if you really have to).

    -T

    Hackintosh thoughts

    File this under the First-World-Problems Dept.

    I have owned and used Apple computers since 1996. Here is the list:

    1. 1996: The first was shared with my brother, an Apple Performa 6400.
    2. 2002: iBook G3 600 MHz
    3. 2007:  15-inch MacBook Pro, Core 2 Duo (Santa Rosa)
    4. 2009: Late-2008, 15-inch MacBook Pro, Unibody

    I’m generally happy with my experience, although it hasn’t been a smooth ride.  The iBook G3 had a smelly keyboard and a DVD drive that wouldn’t stay closed.  My Santa Rosa MacBook Pro needed a power-inverter replacement and a fan replacement, and its FireWire port didn’t work–it was a lemon that Apple graciously replaced with a late-2008 MacBook Pro, whose network port failed the first time I plugged it in at Kobo.

    Aside: I think it was the network there…from what I know, at least two other late-2008 MacBook Pros were affected.

    I still use my MacBook Pro (upgraded to 8 GB of RAM and a 256 GB SSD, with a second hard drive in the original combo-drive slot).  I’m amazed that I’ve been able to keep it running this long.

    This doesn’t include the slew of work computers I’ve had (MacBook Pros, MacBook Airs, etc.), two of which exhibited overheating.  But I digress–this wasn’t supposed to be a post about my poor experience with Apple hardware.  I love the stuff.  Nothing except Lenovo ThinkPads comes close to the build quality that Apple puts out (but the ThinkPads are butt ugly).

    The reason I’m writing this is that I have an itch to build a new computer again.  In mid-2013, I built an ESXi whitebox to experiment with hardware virtualization; I recently pulled that box out of the basement and handed it to my brother because I wasn’t using it.  Lately, when I look at my Hackintosh, I often think of my long history with Apple hardware and software, and of my underlying motivation to build Macs rather than just buying a real one.

    In 2009, I convinced Jen that I could build a Mac myself using some Hackintosh guides.  I built a nice quad-core Q9550 machine.  Three years later, I upgraded my Hackintosh to a build based on an i5-3570K.  I still use it today in my office as my photo workstation.

    Running a Hackintosh is not without its faults.  My video card freezes and locks up the computer.1  I’ve never bothered to get sleep working (although I know it can).

    It’s more cost-effective than buying an iMac if you already have a good monitor, keyboard, etc., but generally more of a pain in the ass to maintain.

    After briefly flirting with ESXi on an AMD FX-8350 build, I’m itching to build another Hackintosh.  The biggest change in the “scene” is the emergence of the Clover EFI bootloader.  Other than that, I see the same issues that I’ve dealt with for the past six years:

    • Sound doesn’t work (get a USB sound card…)
    • It won’t boot (check your hardware configuration, boot flags, .kext files)
    • Power management doesn’t work
    • System updates borked the install
    • FaceTime and iMessage don’t work

    All of these are easy enough to troubleshoot–much easier if you use a vanilla-based install from a legitimate Mac.

    I don’t think cost is much of a driver anymore in the Hackintosh scene.  Six years ago, Mac hardware carried a significant premium, but the gap has mostly narrowed.  It really comes down to folks who want a Mac that is more powerful than the Mac Mini but not tied to a built-in monitor like the iMac.  Count me as one of those users.

    However, it’s 2015 now and even the top-of-the-line Retina iMac is only ~13% faster than the comparable Retina MacBook Pro.  That’s barely above the threshold of noticeability.  In some cases, the iMac performs better than the Mac Pro.

    This is in stark contrast to the newest Mac Mini, whose maxed-out CTO configuration (a dual-core i7) performs at 50% of the iMac.2  My current Hackintosh, when overclocked, is only 15% slower than the latest and greatest.  Not bad for a three-year-old computer.

    Mind you, the Hackintosh scene is pretty small (I’d say we’re talking about thousands of people…), and I doubt Apple will ever do anything to stop people from building them, but you have to wonder: is it even worth it anymore?

    Based on what I’m seeing, the only real spot where Hackintoshes remain relevant is audio engineering or movie editing, where you need to supply your own hardware.  There are some use cases for 3D rendering, more so if you’re willing to spring for a workstation graphics card.  Alternatively, if you want to explore the platform but don’t have access to Apple hardware, a Hackintosh is a good option.

    I can’t even recommend the dual-boot option.  It’s easier to get a separate Windows computer if you want to do some gaming.

    Will I build another?  Doubtful.  I think I’m past that phase of my life.  Should I retire my Hackintosh?  Maybe.  It’s hard to say whether I’d go with a Retina MacBook Pro or the new 5K iMac.

    Tai


    1. I have since rectified this by installing another video card. 
    2. In multi-core benchmarks.  The single-core performance difference is negligible.  To be honest, I’m kind of disappointed. 

    Lightning does strike twice – Linux and Git

    We all know that Linus Torvalds is the father of the Linux kernel.  It’s the guts of an operating system that can be found powering a multitude of devices: the majority of smartphones and tablets (Android), the majority of servers that power the Web, the embedded OS for the Internet of Things (IoT), smart TVs, Kobo eReaders, and even some PCs and laptops.

    What many forget is that Linus is also the father of Git, the most widely used source control management tool today, used by developers all over the world.

    While it’s easy to argue that the Linux kernel will be what Linus is remembered for–many other players have also contributed to Linux’s success–it could be said that his broader impact on computing and development will be Git (which just celebrated its 10-year anniversary this week).

    In my mind, they are accomplishments of equal scale.  That is just a rarity.  Simply amazing.

    HP T610 Plus and pfSense

    When I set up my WatchGuard Firebox X550e, I replaced the two 40 mm fans with silent models.  I also swapped the PSU for a 90 W picoPSU to make a nearly silent system.

    One of the replacement fans gave out and started grinding a few weeks ago, so last week I replaced the whole box with an HP T610 Plus coupled with an Intel i350-T2 dual-gigabit Ethernet card.

    It’s a default install of pfSense 2.2 with no additional config flags, and it runs at 17-19 watts idle.

    The T610 has 4 GB of RAM, a 16 GB MLC SSD, and an embedded 1.65 GHz AMD G-T56N dual-core processor, so it handles all my needs without breaking a sweat.  60 Mbit of VPN traffic (one way) barely even registers.

    I haven’t run any iperf tests yet, but it should be on par with some of the dual-core Intel Atom boxes people use: probably ~750 Mbit, and 150+ Mbit over VPN.

    Tai

    Thoughts on 2014 and 2015

    So I’m starting to see end-of-year wrap-ups and predictions for 2015.  It’s always good to take a look back at what is happening in the industry, especially at Kobo.

    Largely, a lot of the stuff I am citing is predicated on Mary Meeker’s “State of the Internet, 2014 ed.” that she puts together for KPCB (May 2014).  If you haven’t gone through it, I recommend that you do.  It’s a good primer for some of the stuff espoused in the predictions for 2015.

    Ben Evans of Andreessen Horowitz has a great presentation called “Mobile is eating the world.”  It was released in October 2014 and presents an interesting stack of data that basically confirms some of the forward-looking trends in Mary Meeker’s report.

    The competition for consumers’ time will become more ferocious.  The rise of messaging platforms is indicative of this: the “sipping” of conversation and shrinking attention spans will increasingly favour short-form content, which will be a big challenge for Kobo (and our competitors), as we require a magnitude-higher level of engagement to consume our content.  I’m reminded of this NYT op-ed from 2012: “The Flight From Conversation.”

    That said, the quality of short-form content may well improve in 2015.  Well-written and well-designed content is starting to pop up, with places like Medium and Quartz leading the way.  These networks will be the ones pushing innovation on the discovery problem, and they may lead to some interesting applications in the stuff my team designs and manages.

    On the hardware side, all I see is “sensors, sensors, everywhere”, with the Samsung Galaxy S5 packing 10 different sensors (gyro, fingerprint, barometer, Hall effect (recognizes whether the cover is open or closed), RGB ambient light, gesture, heart rate, accelerometer, proximity, compass).  All of this has the potential to be collected and mish-mashed into something useful (not sure what that is… yet).  Is the quantified self really a new industry, or are we navel-gazing?

    I don’t expect this to change much in 2015, although I feel we’ll be hitting the “trough of disillusionment” very shortly, as companies struggle to bring meaning to all the data they collect, and the diminishing returns/insight/usefulness consumers see will probably trip some alarm bells with regards to privacy and security.  On top of all of this, I feel the industry is really just waiting to see what Apple will do.

    Steven Sinofsky, of Windows 8 fame, penned an interesting op-ed for Re/code: “Forecast: Workplace trends, choices and technologies for 2015”.  It’s not necessarily applicable to me at Kobo, but you can see the same trends beginning to move into the enterprise space.

    While 2014 was a banner year for Kobo, the competition around eReading continued to be fierce.  Oyster and Scribd entered the market offering a distinctly different business model; they continue to see success acquiring book rights and growing their user bases off VC-backed capital.  Wattpad continues to operate without any strong competitor in the serialized, self-published space, capturing the next generation of heavy readers and authors.  New eReading startups like Aerbook and Glose are entering the market with differentiated experiences, touting different solutions to the persistent discovery issue of “What to read next?”

    The incumbents did not sit idle, either.  Google Books has incrementally improved its Android experience, offering parity with the Apple, Amazon, and Kobo offerings and an improved non-fiction reading experience.  Amazon, with amazing agility, rolled out its Kindle Unlimited program to match the likes of Oyster and Scribd.  Apple’s bundling of iBooks with iOS 8 has further eroded the iOS platform for virtually every other eBook retailer on the market; the latest reports indicate that bundling iBooks into iOS is adding as many as 1 million new users a week.  Barnes & Noble continues to play in the market, but shipped little, if anything, in the way of product updates, and I suspect they will most likely be a non-factor in 2015.

    On a side note: if you haven’t listened to “Serial”, I wholly recommend it.  It’s great storytelling.

