Web scraping, data pipelines, Python automation, and privacy-focused infrastructure.
Cross-references 256,000+ well records from the New Mexico Office of the State Engineer (OSE) against active land-for-sale listings. Identifies nearby wells within a configurable radius and compiles depth, water table, and well count data per parcel. The output can be customized around the New Mexico OSE well data format; for matching other states' well data to properties, reach out to me directly and I will quote you a solution.
Transforms coordinates with pyproj, then runs Pythagorean (Euclidean) distance matching against well eastings/northings. Filters out zero-depth records and computes average well depth, min/max depth, average depth to water, and well count per listing. The output is highly customizable and can easily be tailored to an individual client's needs.
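A minimal sketch of the matching step, assuming listings and wells already share a projected coordinate system (the field names `easting`, `depth_ft`, and `water_depth_ft` are illustrative, not the actual OSE schema):

```python
import math

# In the real pipeline, pyproj projects listing lat/lon into the wells' CRS
# first, e.g.:
#   transformer = pyproj.Transformer.from_crs("EPSG:4326", "EPSG:26913",
#                                             always_xy=True)
#   easting, northing = transformer.transform(lon, lat)

def match_wells(listing_xy, wells, radius_m=1609.0):
    """Summarize wells within radius_m of a listing.

    listing_xy: (easting, northing) of the parcel, in meters.
    wells: iterable of dicts with 'easting', 'northing', 'depth_ft',
           and 'water_depth_ft' keys (illustrative names).
    """
    lx, ly = listing_xy
    nearby = [
        w for w in wells
        if math.hypot(w["easting"] - lx, w["northing"] - ly) <= radius_m
        and w["depth_ft"] > 0  # drop zero-depth records
    ]
    if not nearby:
        return {"well_count": 0}
    depths = [w["depth_ft"] for w in nearby]
    water = [w["water_depth_ft"] for w in nearby if w["water_depth_ft"] > 0]
    return {
        "well_count": len(nearby),
        "avg_depth_ft": sum(depths) / len(depths),
        "min_depth_ft": min(depths),
        "max_depth_ft": max(depths),
        "avg_water_depth_ft": sum(water) / len(water) if water else None,
    }
```

The summary dict is the per-listing output described above; adding or renaming fields for a given client is a one-line change.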
Playwright-based scraper that intercepts Land.com's internal map-pins API responses, extracts structured listing data, and filters by acreage, price, and price-per-acre thresholds. Handles pagination and geographic bounding automatically.
Uses page.on("response") to capture API payloads as the browser navigates, avoiding direct endpoint calls that trigger rate limiting. Filters listings against configurable criteria (e.g., ≤$5,500/acre, ≤$100k total, ≥1 acre) and feeds directly into the well data matching pipeline.
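A sketch of the interception pattern with Playwright's sync API. The `"map-pins"` URL fragment and the listing field names (`price`, `acres`) are assumptions for illustration, not Land.com's actual schema; the filter is kept as a pure function so it can feed the matching pipeline directly:

```python
def passes_criteria(listing, max_ppa=5500, max_price=100_000, min_acres=1.0):
    """Apply price, total-cost, and acreage thresholds to one listing dict."""
    price, acres = listing.get("price"), listing.get("acres")
    if not price or not acres or acres < min_acres or price > max_price:
        return False
    return price / acres <= max_ppa

def scrape(url):
    # Imported lazily so passes_criteria stays usable without Playwright.
    from playwright.sync_api import sync_playwright

    captured = []

    def on_response(response):
        # Capture the internal map-pins payload as the page loads it,
        # instead of hitting the endpoint directly and tripping rate limits.
        if "map-pins" in response.url and response.ok:  # assumed URL fragment
            captured.extend(response.json().get("results", []))

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.on("response", on_response)
        page.goto(url)
        page.wait_for_load_state("networkidle")
        browser.close()
    return [l for l in captured if passes_criteria(l)]
```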
Full-stack ecommerce system for a phone retail operation. Self-hosted payment processing, cryptocurrency checkout, anonymous eSIM provisioning, and Tor-accessible storefront.
BTCPay Server for self-hosted Bitcoin/Monero payment processing. WooCommerce with MyCryptoCheckout as a fallback gateway. Keepgo eSIM API integration for programmatic SIM provisioning. Cloudflare with onion routing to serve Tor users.
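As an illustration of the checkout flow, a minimal sketch of creating an invoice through BTCPay Server's Greenfield API using only the standard library; the host, store ID, and API key are placeholders:

```python
import json
import urllib.request

BTCPAY_HOST = "https://btcpay.example.com"  # placeholder self-hosted instance
STORE_ID = "STORE_ID_HERE"                  # placeholder
API_KEY = "API_KEY_HERE"                    # placeholder Greenfield API key

def build_invoice(amount, currency="USD", order_id=None):
    """Build the JSON body for a Greenfield create-invoice request."""
    body = {"amount": str(amount), "currency": currency}
    if order_id:
        body["metadata"] = {"orderId": order_id}
    return body

def create_invoice(amount, currency="USD", order_id=None):
    """POST to /api/v1/stores/{storeId}/invoices and return the response."""
    req = urllib.request.Request(
        f"{BTCPAY_HOST}/api/v1/stores/{STORE_ID}/invoices",
        data=json.dumps(build_invoice(amount, currency, order_id)).encode(),
        headers={
            "Authorization": f"token {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # Response includes a checkoutLink to redirect the buyer to.
        return json.load(resp)
```

In the production stack, WooCommerce calls BTCPay through its plugin rather than raw HTTP; this shows the underlying API shape.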
Playwright and requests-based scrapers. Network interception, pagination handling, rate limit management. Comfortable with anti-bot workarounds and proxy rotation.
Cleaning and matching large datasets. Coordinate system transformations, spatial queries, CSV/JSON/GeoJSON processing. Government and public data sources.
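For example, turning cleaned CSV records into GeoJSON for spatial tooling takes only the standard library (the coordinate column names here are illustrative):

```python
import csv
import io
import json

def csv_to_geojson(csv_text, lon_col="longitude", lat_col="latitude"):
    """Convert CSV rows with coordinate columns into a GeoJSON FeatureCollection."""
    features = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        lon, lat = float(row.pop(lon_col)), float(row.pop(lat_col))
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": row,  # remaining columns ride along as properties
        })
    return {"type": "FeatureCollection", "features": features}
```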
Scripts that do real work — scrape, transform, match, and output. Not frameworks, not boilerplate. Practical tools built around specific problems.
Self-hosted services, payment processing, DNS, server configuration, Tor integration. Privacy-oriented architecture from the ground up.
Started out programming complex machines: linear and rotational motion, optimized toolpaths, coordinate systems as daily reality. From there, moved into building live infrastructure for a business, then into data work through needing to answer a specific question. Most of my early automation work revolved around step-heavy, time-consuming repetitive tasks and the need to free that time up for better use elsewhere in my businesses. I get real enjoyment from taking something that seemed impossible to automate and watching efficiency and productivity skyrocket as a direct result.