Evan Welbourne

Greater Seattle Area
9K followers 500+ connections

About

I lead Data at Figma; this includes machine learning, data science, analytics - as well…

Experience

  • Figma

    Seattle, Washington, United States

  • -

    Seattle, Washington, United States

  • -

    Seattle, Washington, United States

  • -

    Greater Seattle Area

  • -

    Greater Seattle Area

  • -

    Bay Area and Seattle

  • -

    Greater Seattle Area

  • -

    San Jose, CA

  • -

    Palo Alto, CA

  • -

    Seattle, WA

  • -

    Greater Seattle Area

  • -

    Seattle, WA and Santa Clara, CA

  • -

    Berkeley, CA

Education

  • University of Washington

    -

    -

    Worked with Prof Gaetano Borriello and Prof Magdalena Balazinska on the RFID Ecosystem project.

  • -

    -

  • -

    -

Publications

  • CrowdSignals: A Call to Crowdfund the Community’s Largest Mobile Dataset

    HASCA, Ubicomp

  • Centaurus: A Client-Based Scripting Framework for Mobile Crowdsensing

    Samsung Best Paper Award

  • Crowdsourced Mobile Data Collection: Lessons Learned from a New Study Methodology

    HotMobile

  • Specification and Verification of Location Events

    Pervasive

  • Building the Internet of Things Using RFID: The RFID Ecosystem Experience

    IEEE Internet Computing

  • Longitudinal Study of a Building-wide RFID Ecosystem

    MobiSys

  • Access Control Over Uncertain Data

    VLDB

  • Cascadia: A System for Specifying, Detecting, and Managing RFID Events

    MobiSys

  • Mobile Context Inference Using Low-Cost Sensors

    LoCA, Pervasive

  • Extracting Places from Traces of Locations

    WMASH

Patents

  • Remote immersive user experience from panoramic video

    Issued US 10,277,813

    This patent relates to creating a virtual reality user experience by stitching together video data from cameras (e.g., to create a field of view of 360 degrees, different camera phones at same event/concert/game, etc.). The video data is mapped to a three-dimensional model based on the locations of the cameras with respect to each other, and a portion of the model is rendered and delivered to a viewing device (e.g., VR headset, mobile phone). The user of the viewing device can look around the scene by moving the device (e.g., turning head right to left if device is being worn). This causes the device to render a different portion of the three-dimensional model and display another view of the scene.

  • Automatic detection and analytics using sensors

    US 20160162844 A1

  • Context-Aware Compliance Monitoring

    US 20150170501 A1

  • Context-aware hypothesis-driven aggregation of crowdsourced evidence for a subscription-based service

    US 20150310347 A1

  • Electronic system with privacy mechanism and method of operation thereof

    US 20160004873

  • Methods and systems for on-device social grouping

    US 20150227611 A1

  • User Identification Based on Voice and Face

    US 14/750,895
