<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://yorko.github.io/atom.xml" rel="self" type="application/atom+xml" /><link href="https://yorko.github.io/" rel="alternate" type="text/html" /><updated>2025-12-05T10:00:34+00:00</updated><id>https://yorko.github.io/atom.xml</id><title type="html">Yury Kashnitsky</title><subtitle>Yury Kashnitsky&apos;s view on machine learning popular science, career development, programming, soft skills, etc.</subtitle><author><name>yorko</name></author><entry><title type="html">Google Antigravity: first impressions</title><link href="https://yorko.github.io/2025/google-antigravity-first-steps/" rel="alternate" type="text/html" title="Google Antigravity: first impressions" /><published>2025-11-25T00:00:00+00:00</published><updated>2025-11-25T00:00:00+00:00</updated><id>https://yorko.github.io/2025/google-antigravity-first-steps</id><content type="html" xml:base="https://yorko.github.io/2025/google-antigravity-first-steps/"><![CDATA[<p>First of all, it’s sci-fi stuff, of course. Just imagine showing this to the version of yourself from 5 years ago.</p>

<p>There are at least three differences between <a href="https://antigravity.google/">Google Antigravity</a> and a standard IDE with a coding agent hovering over it:</p>

<ul>
  <li>
    <p>You can run multiple agents in parallel. Assign frontend to one, backend to another, and research to a third. Then you just come back to check the result;</p>
  </li>
  <li>
    <p>Antigravity launches the application in a sandbox itself, tests it just like a human would, and then summarizes what worked and what didn’t in a “Walkthrough” file. That is exactly what is happening in the video below: when the browser is highlighted with a blue border, it means Antigravity has taken control;</p>
  </li>
  <li>
    <p>Nice little touches like separate files for Task, Implementation Plan, and Walkthrough; the agents’ thoughts and actions are even more transparent.</p>
  </li>
</ul>

<p>You can also watch <a href="https://www.youtube.com/watch?v=nTOVIGsqCuY">this video</a> and try to replicate it.</p>

<p>Below, we’ll run Antigravity on a cool task that I’ve used for quite a while when teaching Python.</p>

<h2 id="prey-and-predators">Prey and Predators</h2>

<p>As an example, let’s take “Prey and Predators”, a.k.a. “Law of the Jungle”, a simplified version of Conway’s <a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life">Game of Life</a>. This used to be a great take-home assignment in an interview series or a university course, and now it can be vibe-coded in minutes.</p>

<p><a href="https://github.com/Yorko/prey_predators_cool_programming_task">Here</a> is a starter repository.</p>

<h3 id="rules-and-task">Rules and Task</h3>

<p>Rules:</p>

<ul>
  <li>
    <p>A predator can eat an adjacent prey or simply move to an adjacent cell.</p>
  </li>
  <li>
    <p>Prey can also move to an adjacent cell.</p>
  </li>
  <li>
    <p>If a predator does not eat anything for a certain amount of time, it dies.</p>
  </li>
  <li>
    <p>After specific time intervals, predators and prey reproduce if there is a neighboring empty cell. The offspring occupies the free cell.</p>
  </li>
</ul>

<p>The current state of the ocean should be displayed on the screen, ideally using a graphical user interface (GUI). The simulation ends either after a certain number of iterations or when all predators or all prey have died.</p>

<p>Use this model to test the hypothesis of cyclical populations of predators and prey.</p>
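If you’d rather sketch the core loop by hand before handing it to an agent, here is a minimal pure-Python version. All names and parameters (`starve_after`, `breed_every`, the initial population sizes) are my own illustrative choices, not part of the task statement:

```python
import random

def neighbors(pos, n):
    """4-connected neighbors of pos inside an n x n grid."""
    r, c = pos
    return [(r + dr, c + dc) for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            if 0 <= r + dr < n and 0 <= c + dc < n]

def step(world, n, t, starve_after=3, breed_every=4):
    """One tick. `world` maps (row, col) -> ['prey'] or ['pred', hunger]."""
    moved = set()                       # cells filled this tick; don't act twice
    for pos in random.sample(sorted(world), len(world)):
        if pos not in world or pos in moved:
            continue                    # eaten or already handled this tick
        animal = world[pos]
        kind = animal[0]
        if kind == 'pred':
            prey_cells = [p for p in neighbors(pos, n)
                          if world.get(p, ('',))[0] == 'prey']
            if prey_cells:              # eat an adjacent prey and take its cell
                target = random.choice(prey_cells)
                del world[pos]
                world[target] = ['pred', 0]
                moved.add(target)
                continue
            animal[1] += 1
            if animal[1] >= starve_after:
                del world[pos]          # predator starves
                continue
        free = [p for p in neighbors(pos, n) if p not in world]
        if not free:
            continue
        target = random.choice(free)
        if t % breed_every == 0:        # offspring occupies the free cell
            world[target] = ['prey'] if kind == 'prey' else ['pred', 0]
        else:                           # otherwise just move
            world[target] = world.pop(pos)
        moved.add(target)

def simulate(n=10, steps=30, n_prey=20, n_pred=10, seed=0):
    """Run the simulation, returning (prey, predator) counts per tick."""
    random.seed(seed)
    cells = random.sample([(r, c) for r in range(n) for c in range(n)],
                          n_prey + n_pred)
    world = {p: ['prey'] for p in cells[:n_prey]}
    world.update({p: ['pred', 0] for p in cells[n_prey:]})
    history = []
    for t in range(1, steps + 1):
        step(world, n, t)
        prey = sum(a[0] == 'prey' for a in world.values())
        history.append((prey, len(world) - prey))
        if prey == 0 or prey == len(world):   # one side died out
            break
    return history
```

Plotting the per-tick `(prey, predators)` history over many steps is one way to check the cyclical-populations hypothesis the task asks about.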

<h2 id="vibe-coding-prey-and-predators-with-antigravity">Vibe-coding Prey and Predators with Antigravity</h2>

<p>For a prompt, I simply gave:</p>

<p><em>Implement a solution for this task <a href="https://github.com/Yorko/prey_predators_cool_programming_task">https://github.com/Yorko/prey_predators_cool_programming_task</a> in a modern tech stack, with Python backend and Next.js frontend</em></p>

<p>Antigravity opened the browser to read through the repository (that’s already pretty cool), then came up with a Task and an Implementation Plan.</p>

<div style="text-align:center"><img src="/images/20251125-google-antigravity-first-steps/task.png" width="500px" /></div>

<div style="text-align:center"><img src="/images/20251125-google-antigravity-first-steps/implementation_plan.png" width="500px" /></div>

<p>Then the app prompts the user to review the Implementation Plan and proceed (it’s also possible to leave comments on both documents, code-review style). After a couple more approvals, it generates the whole codebase and launches the frontend and backend services (updating the Task tab as it goes).</p>

<p>Finally, Antigravity opens the browser to test the application.</p>

<div style="text-align:center">
  <video width="700" controls="">
    <source src="/images/20251125-google-antigravity-first-steps/antigravity_prey_predators.mp4" type="video/mp4" />
    Your browser does not support the video tag.
  </video>
</div>

<div style="text-align:center"><img src="/images/20251125-google-antigravity-first-steps/simulation_running.png" width="500px" /></div>

<p>First impressions are fascinating. It’s magical to see the agent live testing the application.</p>

<p>Whether Antigravity is of much help in real projects (as compared to tools like Gemini CLI or similar) – we’ll see.</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[First of all, it’s sci-fi stuff, of course. Just imagine showing this to the version of yourself from 5 years ago.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220251125-google-antigravity-first-steps/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220251125-google-antigravity-first-steps/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Tutorial: YouTube summarization with Gemini and Google Cloud Run</title><link href="https://yorko.github.io/2025/youtube-summarizer-cloud-run/" rel="alternate" type="text/html" title="Tutorial: YouTube summarization with Gemini and Google Cloud Run" /><published>2025-05-19T00:00:00+00:00</published><updated>2025-05-19T00:00:00+00:00</updated><id>https://yorko.github.io/2025/youtube-summarizer-cloud-run</id><content type="html" xml:base="https://yorko.github.io/2025/youtube-summarizer-cloud-run/"><![CDATA[<p>Gemini is pretty good with YouTube analysis. You can just drop a youtube link into <a href="https://gemini.google.com/">gemini.google.com</a> and ask to provide a summary. Here is an example for <a href="https://youtu.be/jCTvblRXnzg">https://youtu.be/jCTvblRXnzg</a> – a brief, meme-heavy overview of Google’s AlphaEvolve:</p>

<div style="text-align:center"><img src="/images/20250519-youtube-summarizer-cloud-run/gemini_yourube_summarization.png" width="500px" /></div>

<p>Other LLMs, not equipped with YouTube tools, might not be able to fetch the video content when given only a link:</p>

<div style="text-align:center"><img src="/images/20250519-youtube-summarizer-cloud-run/openai_yourube_summarization.png" width="500px" /></div>

<p>(Funnily enough, this particular LLM brags about GPT-4o instead of Google’s AlphaEvolve).</p>

<p>Notes:</p>
<ul>
  <li>While Gemini is a multimodal AI capable of understanding images and audio, its YouTube summarization feature primarily focuses on the textual content. It doesn’t typically analyze visual cues or nuances in the speaker’s tone to generate the summary;</li>
  <li>Summarizing long videos can be a challenge; for such tasks, it’s better to first split the video into chunks.</li>
</ul>
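For the chunking approach, a tiny helper that splits a video’s duration into overlapping windows is enough; each window can then be summarized separately and the partial summaries merged in a final pass. This is a sketch of one possible scheme, with arbitrary default window sizes of my own choosing:

```python
def chunk_intervals(duration_sec, chunk_sec=600, overlap_sec=30):
    """Split [0, duration] into overlapping (start, end) windows, in seconds."""
    if duration_sec <= chunk_sec:
        return [(0, duration_sec)]
    intervals, start = [], 0
    while True:
        end = min(start + chunk_sec, duration_sec)
        intervals.append((start, end))
        if end == duration_sec:
            break
        start = end - overlap_sec   # overlap so no sentence is cut in half
    return intervals
```

Each `(start, end)` pair can then go into a per-chunk prompt (e.g. “summarize the segment from 9:30 to 19:30”), with one final call combining the partial summaries.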

<h2 id="goal-of-the-tutorial">Goal of the tutorial</h2>

<p>Let’s build a web application that summarizes YouTube videos with Google’s Gemini and deploy it to Google Cloud Run.</p>

<div style="text-align:center"><img src="/images/20250519-youtube-summarizer-cloud-run/youtube_summarizer_interface.png" width="800px" /></div>

<p><em>This tutorial is a modernized version of the code lab <a href="https://codelabs.developers.google.com/devsite/codelabs/build-youtube-summarizer">Build a YouTube Summarizer Codelab</a>.</em></p>

<h2 id="prerequisites">Prerequisites</h2>

<ul>
  <li>Python 3.x;</li>
  <li>Google Cloud SDK (<code class="language-plaintext highlighter-rouge">gcloud</code>) installed and configured;</li>
  <li>A Google Cloud Project with billing enabled;</li>
  <li>Required APIs enabled in your Google Cloud Project (Cloud Build, Cloud Run, IAM, Vertex AI, etc. - refer to deployment scripts for specifics);</li>
  <li>Docker (for local building if not using Cloud Build).</li>
</ul>

<h2 id="installation--setup">Installation &amp; Setup</h2>

<ol>
  <li>
    <p>Open a terminal and clone the repository with the code for this tutorial:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> git clone git@github.com:Yorko/youtube-summarizer-cloud-run.git
</code></pre></div>    </div>
  </li>
  <li>
    <p>Install <code class="language-plaintext highlighter-rouge">uv</code> – the modern Python dependency manager:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> pip <span class="nb">install </span>uv
</code></pre></div>    </div>
  </li>
  <li>
    <p>Create a virtual env:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> uv venv
</code></pre></div>    </div>
  </li>
  <li>
    <p>Install Python dependencies with <code class="language-plaintext highlighter-rouge">uv</code>:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>uv <span class="nb">sync</span>
</code></pre></div>    </div>
  </li>
  <li>
    <p>Configure Google Cloud:</p>
    <ul>
      <li>Set your project ID: <code class="language-plaintext highlighter-rouge">gcloud config set project YOUR_PROJECT_ID</code></li>
      <li>Ensure your user account or service account has the necessary permissions (e.g., Cloud Run Admin, Cloud Build Editor, Service Account User, Vertex AI User).</li>
    </ul>
  </li>
</ol>

<h2 id="local-usage">Local usage</h2>

<ol>
  <li>Run the FastAPI backend:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> make run_backend
</code></pre></div>    </div>
  </li>
  <li>Run the Streamlit frontend (in a different terminal tab/window):
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> make run_frontend
</code></pre></div>    </div>
  </li>
  <li>Access the application in your web browser: <code class="language-plaintext highlighter-rouge">http://localhost:8501</code></li>
  <li>Enter a YouTube video link (e.g. <a href="https://youtu.be/jCTvblRXnzg">this one</a> on AlphaEvolve by DeepMind), optionally add a prompt, and click “Generate Summary”.</li>
</ol>

<h2 id="deployment-to-google-cloud-run">Deployment to Google Cloud Run</h2>

<p>The project includes a script to automate the build and deployment process.</p>

<ol>
  <li>Review and modify <code class="language-plaintext highlighter-rouge">build_n_deploy.sh</code>: Update <code class="language-plaintext highlighter-rouge">PROJECT_ID</code>,  <code class="language-plaintext highlighter-rouge">SERVICE_NAME</code>, <code class="language-plaintext highlighter-rouge">DEPLOY_REGION</code>, and <code class="language-plaintext highlighter-rouge">SERVICE_ACCOUNT</code> variables as needed for your environment.</li>
  <li>
    <p>Build the container image using Google Cloud Build:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make build_cloud_image
</code></pre></div>    </div>
  </li>
  <li>
    <p>Deploy the service to Cloud Run:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>make deploy_cloud_run_service
</code></pre></div>    </div>
    <p>This command deploys the container image built in the previous step, configuring the service account, minimum instances, memory, and allowing unauthenticated access by default. The script will output the URL of your deployed Cloud Run service. Visit this URL to test your deployed YouTube Summarizer!</p>
  </li>
  <li>
    <p>In case your organization doesn’t allow unauthenticated access, you can proxy the service to localhost:</p>

    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> gcloud run services proxy <span class="k">${</span><span class="nv">SERVICE_NAME</span><span class="k">}</span>
</code></pre></div>    </div>
  </li>
</ol>
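For reference, the deploy step boils down to a single `gcloud run deploy` invocation; the flags below are a configuration sketch of what a script like <code class="language-plaintext highlighter-rouge">build_n_deploy.sh</code> presumably sets (the exact values are assumptions, check the script for the real ones):

```shell
# Assumed flags and values; adjust the variables to your environment.
gcloud run deploy "${SERVICE_NAME}" \
  --image "gcr.io/${PROJECT_ID}/${SERVICE_NAME}" \
  --region "${DEPLOY_REGION}" \
  --service-account "${SERVICE_ACCOUNT}" \
  --min-instances 1 \
  --memory 2Gi \
  --allow-unauthenticated
```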

<p>Just like with the locally run application, this will open the app at <a href="https://localhost:8080/">https://localhost:8080</a>.</p>

<h2 id="bonus-challenges-optional">Bonus challenges (optional):</h2>

<ul>
  <li>Explore the <code class="language-plaintext highlighter-rouge">scripts/enable_oauth_for_cloud_run.sh</code> script. Understand how it sets up a Load Balancer and Identity-Aware Proxy (IAP) to restrict access to authenticated users. Try implementing it for your service;</li>
  <li>Modernize the app: split the frontend &amp; backend into different services and use <code class="language-plaintext highlighter-rouge">cloudbuild.yaml</code> to specify different Dockerfiles for them;</li>
  <li>Experiment with prompting.</li>
</ul>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[Gemini is pretty good with YouTube analysis. You can just drop a youtube link into gemini.google.com and ask to provide a summary. Here is an example for https://youtu.be/jCTvblRXnzg – a brief, meme-heavy overview of Google’s AlphaEvolve:]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220250519-youtube-summarizer-cloud-run/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220250519-youtube-summarizer-cloud-run/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">48 interviews away from a Google offer</title><link href="https://yorko.github.io/2024/interviews-23-24/" rel="alternate" type="text/html" title="48 interviews away from a Google offer" /><published>2024-09-30T00:00:00+00:00</published><updated>2024-09-30T00:00:00+00:00</updated><id>https://yorko.github.io/2024/interviews-23-24</id><content type="html" xml:base="https://yorko.github.io/2024/interviews-23-24/"><![CDATA[<p>I’ve been looking for Applied ML scientist positions for quite a while, got rejections from 16 companies (mostly, bigtech and startups) and 2 offers. In an earlier post, I shared some helpful resources, here I’ll share some high-level stats.</p>

<p>Breakdown by interview type:</p>

<ul>
  <li>Behavioral - 13.5</li>
  <li>Coding - 8.5</li>
  <li>ML breadth - 6</li>
  <li>ML depth - 5</li>
  <li>ML coding - 4</li>
  <li>Research presentation - 4</li>
  <li>ML system design - 3.5</li>
  <li>Take-home assignment - 3</li>
  <li>System design - 0.5</li>
</ul>

<p>I was positively surprised that coding tasks were generally easy (think LeetCode Easy). I looked for senior+ positions, hence so many behavioral interviews. Also, for Applied ML Scientist roles you see a skew towards ML interviews and almost no system design interviews.</p>

<p>Interview leads (how did I get to the 1st interviews):</p>

<ul>
  <li>Referral - 7</li>
  <li>Cold application - 4</li>
  <li>Reached out directly to the Hiring Manager - 4</li>
  <li>Recruiter/HM reaching out - 3</li>
  <li>Data Science communities - 2</li>
</ul>

<h3 id="most-common-questions">Most common questions</h3>

<p>I didn’t record each and every one, of course, but these stood out:</p>

<ul>
  <li>p-value. Learn it by heart but also understand what it means</li>
  <li>in NLP-style ML breadth interviews, they all want you to explain the transformer or attention architecture. I made a post on how I tackle these questions</li>
  <li>Amazon is notoriously famous for puzzling behavioral questions like “tell me about a time you used data to adapt your strategy”. Other companies and startups would rather ask you to describe successes, failures/conflicts, i.e. more expected stuff</li>
  <li>I was never (exactly 0 times) challenged to describe my weaknesses. Not sure if the importance of this question is overrated, just my experience</li>
</ul>
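Since the transformer question comes up in nearly every ML breadth interview, it helps to remember that its core, scaled dot-product attention, fits in a few lines of numpy. A toy single-head, unbatched version without masking (the shapes here are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V for a single head, no batching or masking."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (seq_q, seq_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out, attn = scaled_dot_product_attention(Q, K, V)
```

Being able to write this from memory, and explain why the scores are scaled by sqrt(d_k), covers a surprising share of those questions.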

<h3 id="interviews">Interviews</h3>

<p>Here are some highlights.</p>

<h4 id="apple-music-ml-researcher-for-recommender-systems-london">Apple Music, ML Researcher for Recommender Systems, London</h4>

<p>Got rejected after the first interview. They ended up hiring my former colleague who already lives in London and has way more experience with recommendation systems. Fair enough.</p>

<h4 id="aiforia-ml-team-lead--founded-by-folks-from-yandex-and-sber-devices-now-in-cyprus">Aiforia, ML Team lead – founded by folks from Yandex and Sber Devices, now in Cyprus</h4>

<p>Passed the HM interview (mix of behavioral/technical).  Then a Kaggle grandmaster grilled me on “ML fundamentals.”  Felt like light trolling tbh, haven’t seen such tricky questions in ages. Some even started with “I don’t know the answer to this one myself” lol.  Anyway, they were looking for experience with voice tech, which I don’t have at all. So no hard feelings, didn’t make the cut.</p>

<h4 id="replika">Replika</h4>

<p>Saw their post in Vas3k club that they’re looking for frontend devs, but you can reach out anyway. I wrote to the CTO, we chatted. Not exactly a perfect match, they needed researchers with a strong engineering focus.</p>

<p>Lesson learned: no matter how much I want to emphasize Applied Science, I shouldn’t use phrases like “messing around with configs” 🙂</p>

<h4 id="nvidia-senior-applied-scientist">Nvidia, Senior Applied Scientist</h4>

<p>Pain and humiliation, they absolutely smashed me. I didn’t just fail, but failed miserably.</p>

<p>Right off the bat, it was like “So, you think you’re hot stuff?” What do you use for DPO? How have you done distributed model-parallel training? No? Only DDP? Ah, haven’t touched 70B models? The interviewer was very polite, but that was the vibe.</p>

<p>Then it was okay. Transformers, NLP, all that jazz. Almost everyone asks about the transformer architecture.  Dude was dropping references to papers like Retro left and right, clearly well-read. But I think I held my own in the conversation.</p>

<p>I crumbled at the very first engineering question. What’s the difference between storing variables on the stack and the heap? And how is that related to local/global variables? I didn’t just forget, I don’t think I ever even learned this. It took me two tries to even understand the question. The most I could mumble was something about the stack appearing during recursion. The peak of the interview was question #24:</p>

<div style="text-align:center"><img src="/images/20230930-48-interviews-to-google/so_have_you_done_programming.jpg" width="500px" /></div>

<p>And algorithms: the “8 queens” problem. Classic, 101, according to the interviewer. Didn’t have to write code, just describe the solution. But I started rambling about dynamic programming, then backtracking.  At least I correctly estimated the factorial complexity, but I still couldn’t clearly describe the solution with DFS. Thought it was a simple problem, basic stuff, but it’s actually a <a href="https://leetcode.com/problems/n-queens/description/">hard</a> one.</p>
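For the record, the backtracking/DFS solution is short once you know the trick of tracking attacked columns and diagonals in sets. A standard textbook sketch, not necessarily what the interviewer wanted to hear:

```python
def n_queens(n):
    """Count placements of n mutually non-attacking queens via DFS + backtracking."""
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()   # attacked columns / diagonals

    def place(row):
        nonlocal solutions
        if row == n:                           # a queen in every row: a solution
            solutions += 1
            return
        for col in range(n):
            if col in cols or row + col in diag1 or row - col in diag2:
                continue                       # square is attacked: prune
            cols.add(col); diag1.add(row + col); diag2.add(row - col)
            place(row + 1)                     # recurse into the next row
            cols.remove(col); diag1.remove(row + col); diag2.remove(row - col)

    place(0)
    return solutions
```

The naive search space is indeed O(n!), but pruning attacked squares makes n = 8 (92 solutions) instant.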

<p>NVIDIA is looking for rock stars, strong in both research and engineering. They can afford it, the job description for Senior Applied Scientist has a salary range for the US of 180-333k, and that’s just the base. And their stock is through the roof. So it’s all good.</p>

<div style="text-align:center"><img src="/images/20230930-48-interviews-to-google/interview_embarrassment.jpg" width="500px" /></div>

<h4 id="cohere-member-of-technical-staff">Cohere, Member of Technical Staff</h4>

<p>Really liked these guys, the interviews were super reasonable.  Had a small coding test to optimize some Python code (got the same problem that a Kaggle grandmaster gave me before, so that interview wasn’t a waste after all, heh). Instead of LeetCode, they had ML coding - I had to implement sampling from a simplified LLM decoder (greedy, top-k, top-p). ML system design was literally about an LLM evaluation system that Cohere is currently working on. I bet they get a lot of ideas from candidates :) And a paper review of my choice, also about LLM evaluation.  In the final round, I had a behavioral interview with the big boss, and there was no spark. Didn’t get real feedback, just some excuse like “lacked the level of adaptability and speedy execution.”</p>
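That decoder-sampling exercise is easy to reconstruct in plain Python (this is my own toy version of the idea, not Cohere’s actual task): given raw logits, greedy picks the argmax, top-k samples from the k most likely tokens, and top-p samples from the smallest set whose probability mass reaches p.

```python
import math
import random

def sample_next_token(logits, strategy="greedy", k=5, p=0.9, rng=random):
    """Pick a token id from raw logits via greedy / top-k / top-p (nucleus) sampling."""
    m = max(logits)
    probs = [math.exp(x - m) for x in logits]   # numerically stable softmax
    total = sum(probs)
    probs = [q / total for q in probs]
    if strategy == "greedy":
        return max(range(len(probs)), key=probs.__getitem__)
    order = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)
    if strategy == "top_k":
        keep = order[:k]
    elif strategy == "top_p":
        keep, cum = [], 0.0
        for i in order:                          # smallest prefix with mass >= p
            keep.append(i)
            cum += probs[i]
            if cum >= p:
                break
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    mass = sum(probs[i] for i in keep)           # renormalize over the kept set
    return rng.choices(keep, weights=[probs[i] / mass for i in keep])[0]
```

In a real decoder you’d apply this per step over the vocabulary logits; the three strategies only differ in which candidate set survives before the weighted draw.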

<h4 id="snorkel-ai-staff-applied-nlp-scientist">Snorkel AI, Staff Applied NLP Scientist</h4>

<p>I got really emotionally attached to this option (don’t do that before you have an offer). It’s a startup founded by 5 PhDs from Stanford, growing rapidly, a unicorn.  They have a pretty coherent vision: companies don’t need monsters with 1.8T parameters, they need specific models for their domain, fine-tuned on their data (YC agrees, <a href="https://www.ycombinator.com/rfs">“Request for startups”</a>: small fine-tuned models as an alternative to giant generic ones). Snorkel is all about programmatic data labeling, which saves experts time on labeling, and also uses LLMs for soft labels. Plus distillation/quantization - you get small and powerful models, Snorkel’s blog is full of such stories (<a href="https://snorkel.ai/better-not-bigger-how-to-get-gpt-3-quality-at-0-1-the-cost/">example</a>).</p>

<p>The interviews here were also very reasonable: behavioral first, then ML coding, ML system design, and a presentation about a research project. In that one I didn’t show sufficient tech expertise but rather focused on leadership. I should have clarified with the recruiter which <a href="https://staffeng.com/guides/staff-archetypes/">staff archetype</a> they need.</p>

<h4 id="llm-startup-vp-ai-offer">LLM Startup, VP AI (offer)</h4>

<p>No-name startup, pre-seed, but the position is VP, I could have joined as the 6th employee, with a whole 1% of the company. Tempting, but I’m a coward and not doing a startup.</p>

<p>The interview was quite unusual - they sent me a doc in advance with ~20 questions about everything under the sun:</p>

<ul>
  <li>“We’re an LLM startup, why won’t the next OpenAI update kill us?”</li>
  <li>“Hallucinations are critical in our business, what are we going to do about them?”</li>
  <li>“How will we integrate the research department into the company structure?”.</li>
  <li>And even this: “Everyone is chasing GPUs now; maybe we’ll be smarter and look at new chips?”.</li>
</ul>

<p>For all the questions, I was asked to share my vision. Perhaps the most interesting interview. Initially, the CTO wanted to spend half an hour on LeetCode, but during the conversation, he was like, “okay, let’s not waste time, the conversation is going well. High-bandwidth conversation.”</p>

<h4 id="google-cloud-genai-field-solutions-architect-offer">Google Cloud, GenAI Field Solutions Architect (offer)</h4>

<p>I got into Google through a referral, finally they didn’t ignore me. Ironically, I was referred by a girl I helped leave Google to join Mistral.</p>

<p>Google has gradually converged to a format of 4 interviews (it used to be 15-20). I had the following rounds:</p>

<ul>
  <li>Leetcode + system design</li>
  <li>Role-related knowledge</li>
  <li>Leadership &amp; googleyness</li>
  <li>General Cognitive Ability</li>
  <li>“casual” conversation with the manager</li>
</ul>

<p>In the first round, the leetcode seemed easy and the design seemed difficult. I studied the design thoroughly (speaking of interview success being 50% effort and 50% luck, I haven’t prepared this long for any other company). With big tech, you can ask for a couple of weeks to prepare, usually they are okay with this. And the mocks were very helpful, especially considering that I had never done a design interview before.</p>

<p>Role-related knowledge was about LLMs and consulting, there were a lot of questions about how to describe LLMs to clients, top managers, engineers. The technical questions didn’t seem difficult (the “Generative AI with LLMs” <a href="https://t.me/new_yorko_times/160">course</a> and my own experience with LLMs were enough), but for the business acumen and consulting questions, practice with business cases, like they do in big3, wouldn’t hurt.</p>

<p>Leadership &amp; googleyness is basically behavioral. Even though I’m a mentor myself, I did 4 mocks, learning what exactly they want to hear in interviews for staff positions at Google. This was incredibly helpful. As a result, I pretty thoroughly reworked my story bank. Thankfully, there were no tricky, Amazon-style questions like “tell me how you used data to modify your strategy”. Instead, it was more or less clear from the question what leadership qualities were being discussed and which of my stories to tell.</p>

<p>General Cognitive Ability is open-ended questions like “a friend opened a chocolate store, advise him on a business plan.” There is a clear framework here, easy to practice. <a href="https://www.youtube.com/@jeffhsipepi">This</a> YouTube channel by a former Google HR Jeff Sipe helped me a lot (there’s a whole playlist about negotiations as well). I also took a consultation with a small mock, where I was advised to speak more slowly.</p>

<p>And the “casual” conversation with the manager - not casual at all, should be treated as behavioral. You can chat about life later, once they hire you, in the interview they look at signals. I prepared my strongest stories treating this as a behavioral interview.</p>

<p>Overall, I estimate the contribution of behavioral to be about 80%. Yeah, I didn’t expect that could happen with Google. But this is a position in the Sales track, not SWE, I will have to communicate a lot with clients and top managers, so that’s the focus.</p>

<h3 id="conclusions">Conclusions</h3>

<p>Now the most important part. A long job search is mostly about your mental health and psychological stamina. I’d highlight the following considerations:</p>

<ul>
  <li>it’s a marathon. Reserve 1 year for your job hunt. If done faster, great; otherwise, keep going</li>
  <li>it’s a controlled lottery. Not just lottery (otherwise, you’ve got no lever to change anything) but a controlled one: you can improve your odds of winning by working hard but still treat this as a lottery</li>
  <li>respect your mental health. Just do what helps you: sports/yoga/art/etc. Don’t do leetcode at 3am if you get up at 7am</li>
  <li>don’t get emotionally attached to any position before you get an offer. I did that a couple of times, and this only makes getting a rejection harder</li>
  <li>don’t overthink it. Many people tend to think there’s smth wrong with them or they suck or smth. No. The actual reason for rejection might be just location or another better candidate. One bigtech company ditched me after the 1st tech round, and it turned out they hired my ex-colleague who’s based in London and has more experience in recommender systems. And that’s fine, I wouldn’t have benefited from hours of self-reflection and wondering “why the heck did they ditch me?”. So make constructive conclusions from your interview experience and move on.</li>
</ul>

<p>Good luck!</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[I’ve been looking for Applied ML scientist positions for quite a while, got rejections from 16 companies (mostly, bigtech and startups) and 2 offers. In an earlier post, I shared some helpful resources, here I’ll share some high-level stats.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230930-48-interviews-to-google/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230930-48-interviews-to-google/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">[EN/RU] To Ph.D. or not to Ph.D.</title><link href="https://yorko.github.io/2024/to-phd-or-not-to-phd/" rel="alternate" type="text/html" title="[EN/RU] To Ph.D. or not to Ph.D." /><published>2024-04-12T00:00:00+00:00</published><updated>2024-04-12T00:00:00+00:00</updated><id>https://yorko.github.io/2024/to-phd-or-not-to-phd</id><content type="html" xml:base="https://yorko.github.io/2024/to-phd-or-not-to-phd/"><![CDATA[<p><em>In this post I collected my own subjective pros and cons of pursuing a Ph.D. degree. If you scroll down, you’ll find a richer version of this post in Russian, with all jokes and puns.</em></p>

<hr />

<p>That’s an eternal debate, and there are quite a few “Ph.D. survival guides” already published (e.g., <a href="https://karpathy.github.io/2016/09/07/phd/">the one by Andrej Karpathy</a> or <a href="https://www.ruder.io/10-tips-for-research-and-a-phd/">“10 Tips for Research and a PhD”</a> by Sebastian Ruder; beware of the survivorship bias btw). Still, I wanted to share my view, even though I did a Ph.D. in Russia and the European/US/etc experience might be somewhat different.</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/buenos_aires.jpg" width="500px" /></div>
<p><em>As a prelude, one of the widest avenues in the world, in Buenos Aires, where I attended IJCAI 2015 and watched DeepMind present Atari RL for the first time. And of course, a genuine Argentinian steak is a lifetime gastronomic experience.</em></p>

<h1 id="pros">Pros</h1>

<h2 id="1-career-boost">1. Career boost</h2>

<p>Even though, as the saying goes, “Ph.D. is for people who love ideas more than money,” career-wise it can still be a great time investment. I personally enjoy R&amp;D, for me it’s a sweet spot between industry and academia. Some of the most appealing positions I’ve seen require a Ph.D. as a minimum requirement (though often it can be substituted with a master’s + 5-7 years of industrial experience). Another consideration for me is that a Ph.D. degree might help me when I’m 70+. Teaching calculus at a university is not a bad plan for me, a nerd.</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/who_love_ideas.jpg" width="500px" /></div>
<p><em>“PhD is for people who love ideas more than money”.</em></p>

<h2 id="2-freedom-of-creation">2. Freedom of creation</h2>

<p>After I finished graduate school but had not yet defended my thesis, I went to work as a DS for an IT giant. For the dissertation, I still had to prove one small theorem, and that’s when I realized how blinkered my brain had become and how much more difficult it was to just sit down and think calmly. But when you are in graduate school full-time, ideas come to your mind all the time despite the chaos.</p>

<h2 id="3-more-free-time-kinda">3. More free time (kinda)</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/saturdays.jpg" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>Don’t get me wrong, academia ain’t all rainbows and beers during lunch breaks. Some research institutes might operate like that, but my Ph.D. was a serious full-time endeavor. My advisor would challenge me to aim at top-tier ML conferences. Add grant proposals, conference deadlines, and teaching, and your Ph.D. turns into a hectic activity, just like an industrial position. Despite all that, grad school felt like it offered more free time compared to the industry, especially for networking and side projects. It was during grad school that I started teaching ML/DS at universities and corporations, and eventually kicked off mlcourse.ai. You could argue I could’ve spent that time solely on research and aimed to be part of that elusive 0.1% of people who made a difference with their Ph.D. thesis, but my excuse is that I’m too dumb for that.</p>

<h2 id="4-constantly-learning-new-stuff">4. Constantly learning new stuff</h2>

<p>One day you’re studying GANs and game theory, then you dig into advanced statistics; a week later you realize you need to refresh some graph theory, then maybe group theory. It’s constant learning; you are always catching up with the smart folks around you. This might be stressful, but in my case it only motivated me.</p>

<h2 id="5-developing-industry-relevant-skills">5. Developing industry-relevant skills</h2>

<p>With CS/DS/ML, we are lucky. As a Ph.D. candidate, I learned a lot that is relevant to industry IT roles: algorithms, databases, social network analysis, etc. In fact, I spent the whole first year taking four Coursera courses at a time to catch up and fill the gaps. No wonder you’d then see yesterday’s Ph.D. candidates landing jobs at Amazon, Meta, and the like.</p>

<h1 id="cons">Cons</h1>

<p>Certainly, there are many downsides to pursuing a Ph.D. degree, even apart from “spending these 4-6 years at Google instead”.</p>

<h2 id="1-feeling-like-a-loser">1. Feeling like a loser</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/existentialcrisis.png" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>While doing a Ph.D., you constantly feel that you’re stupid, that your research sucks, and that the thesis itself is worth nothing (the latter is true; only your papers matter). Even at grad seminars, where we Ph.D. candidates informally shared what we were working on, I couldn’t shake the feeling that we were all smart losers. Moreover, I felt that everyone else felt the same. It’s probably even worse for postdocs, with their never-ending need to beg for money. As one of my colleagues put it, “You are never happy doing your Ph.D.”</p>

<h2 id="2-bad-working-culture">2. Bad working culture</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/final_doc.png" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>The downside of a flexible schedule in grad school is that you often work on weekends and even holidays. But the biggest issue isn’t time management. In academia, I saw many smart people, but not a single organized one I could learn from. It’s not only the infamous messy “academic” style of coding; it’s the “publish or perish” pressure, deadlines, multitasking, and overlapping projects that eventually lead to sloppiness and a bad working culture.</p>

<h2 id="3-bureaucracy">3. Bureaucracy</h2>

<p>That’s probably more relevant to Russia and my old-fashioned Ph.D. defense process, so I won’t elaborate. tl;dr: in my case, it involved a damned shitload of bureaucracy.</p>

<h2 id="4-grants">4. Grants</h2>

<p>That’s the sole reason I’ll never go back to academia (well, apart from being too dumb to produce revolutionary ideas). All these grant applications and reports just drain you and make you feel miserable (especially grant reports, in the pre-GPT era). Searching for grants, writing applications and reports, bluffing about the results – all of this makes you (or at least made me) really unhappy. Rumor has it that one of the pitches used to lure professors and postdocs into industry goes: “Think about it! You won’t have to write grant applications. Ever. E-ver.”</p>

<h2 id="5-the-gestalt-monster">5. The Gestalt monster</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/geschtalt.jpg" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>Your Ph.D. thesis is a huge gestalt. Anyone who’s been through it knows: the period from the pre-defense to the long-awaited defense is incredibly stressful. Wherever you are, whatever you’re doing, part of your brain is occupied by the thought that you should be writing your thesis. This “tumor” eats away at part of your brain. And even though the defense brings a tremendous feeling of relief, I wouldn’t take the same route again.</p>

<h1 id="conclusion">Conclusion</h1>

<p>Despite all the downsides, I want to end on a positive note: a Ph.D. is a cool experience. While I wouldn’t call it the happiest period of my life, I don’t regret it. It’s an adventure and an extension of your childhood: when else would you go for it if not at 23? And a bit of ageism as a wrap-up: my advice is to go to grad school while you’re young; once you’re loaded with work, family duties, and other responsibilities, it’s too hard to break out of that local minimum.</p>

<hr />

<h1 id="a-richer-version-of-this-post-in-russian">A (richer) version of this post, translated from Russian</h1>

<p><em>The corresponding <a href="https://t.me/new_yorko_times/51">Telegram post</a> and <a href="https://vas3k.club/post/18133">Vastrik post</a>.</em></p>

<h1 id="субъективные-плюсы-и-минусы-аспирантуры">Subjective pros and cons of grad school</h1>

<p>I’ve long wanted to reflect on whether chasing a Candidate of Sciences / Ph.D. degree is worth it at all. Many spears have been broken in arguments over whether it’s needed at all, or whether you’d be better off spending those same years working. I won’t chase objective arguments; I’ll simply describe my own subjective experience, the pros and cons as I saw them.</p>

<p>A caveat: I chose the HSE graduate school, where in 2013 one could earn almost as much as at junior/middle positions in industry, and also fly to conferences around Europe and even once to Argentina. I also received my Candidate of Technical Sciences degree in the old format, with dissertation councils and secretaries, so a Ph.D. experience in Europe/America may differ from mine.</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/buenos_aires.jpg" width="500px" /></div>
<p><em>Possibly the widest avenue in the world, in Buenos Aires. A pity they don’t let you venture far from the capital – everything revolves around the conference dates. But a real Argentinian steak is quite an experience!</em></p>

<p>I don’t regret going to grad school one bit, even though academia is for that 0.1% of people (a rough guess) capable of producing truly worthwhile ideas.</p>

<h1 id="плюсы">Pros</h1>
<h2 id="плюс-1-карьерный">Pro 1. Career</h2>

<p>Granted, “Ph.D. is for people who love ideas more than money,” but I’ll still name the career benefit as the first pro. It took me a long time to figure out what I actually love doing, and at the moment it’s R&amp;D (positions like applied scientist). On the one hand, doing cookie-cutter projects in pure industry is boring; on the other, pure research is hard to pull off – you have to be smart and persistent. Applied research in a corporation, without the obligation to publish, is just right, a sweet spot: you check what the state of the art is, read papers, try to apply them, and aim for business impact as well. You can even partner with someone, so you don’t struggle alone and folks from academia help out too. Great, isn’t it? And here I see the benefit of the degree: to people in academia it says “this guy is one of us,” while to management in corporations it says “this guy can look at things in a new way.” No wonder some R&amp;D positions list a Ph.D. as an entry requirement.</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/who_love_ideas.jpg" width="500px" /></div>
<p><em>“PhD is for people who love ideas more than money”.</em></p>

<p>I also have a rather definite nerdy vision of my old age, far clearer than of my next 20 years. I’ll be a university lecturer running calculus seminars: having had my fill of this corporate world, I’ll return to the eternally beautiful. Yes, I’ll be an old-timer, and what’s wrong with that; I hope to work until the end without losing my mind. I’ll live and breathe calculus, talk with students, instill in them a love of mathematics, and enjoy it. And for that kind of retirement, a Ph.D. comes in very handy.</p>

<h2 id="плюс-2-свобода-творчества">Pro 2. Freedom of creation</h2>

<p>After I had finished grad school but before defending, I went to work as a research engineer at Mail.ru. I still had to prove one small theorem for the dissertation, and that’s when I realized how blinkered my brain had become, how much harder it was to just sit down and calmly think. But when you are in grad school full-time, ideas come to mind constantly, even though there is plenty of hustle as well.</p>

<h2 id="плюс-3-свободного-времени-больше">Pro 3. More free time</h2>

<p>Don’t think that academia is laid-back and you can drink vodka during the lunch break. Somewhere in old research institutes that’s exactly what they do, of course, but the HSE graduate school (like a proper Western Ph.D.) is a full-time endeavor. My advisor challenged me to publish at top conferences, kept pushing me and didn’t let me stop at two papers in the VAK-listed bulletin of a provincial technical institute. And academia certainly has plenty of its own quirks with grant applications and conference deadlines, which at times turn your weekends into workdays (in my wife’s case, the black hole devouring all time and mental fuel was teaching).</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/saturdays.jpg" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>For all that, it seemed to me that grad school still leaves more free time than industry, judging by my Mail.ru experience – more time for networking and side projects. It was during grad school that I started actively teaching ML/DS at a university and in corporations and eventually laid the foundations of http://mlcourse.ai (one could object that instead I could have invested that time in research proper and gotten into that 0.1% of people who produced worthwhile ideas, but my excuse is that I’m too dumb for that).</p>

<h2 id="плюс-4-желание-постоянно-изучать-что-то-новое">Pro 4. The urge to constantly learn something new</h2>

<p>Today you read about GANs and their connection to game theory, so you grind through game theory; tomorrow you take on statistics properly; the day after – graphs; next week – group theory. A month later you start teaching algorithms to understand them better yourself. And so it goes, constantly. It feels like everyone around is smarter than you and you have to catch up. It has surely stressed some people out, but I found it exciting.</p>

<h2 id="плюс-5-релевантные-для-индустрии-навыки">Pro 5. Industry-relevant skills</h2>

<p>Each time I want to stress that all of this is subjective. Some Ph.D. candidates burrow deep into narrow fields of science and aren’t so lucky in transferring their skills to industry, but with Data Science and Machine Learning we were far luckier. I spent almost the entire first year grinding through 3-4 Coursera courses at a time, closing the gaps left by a weak master’s program – algorithms and databases, discrete math, social network analysis – and none of it was wasted: it was quite relevant, if not to the abstract “industry,” then at least to job interviews. Later I taught students Python and machine learning and, without waiting for the defense, got hired without much trouble as a research engineer at the aforementioned Mail.ru (though before grad school I had rowed on such a galley that I’d rather not recall it, and I didn’t pick up many valuable hard skills there). One should always allow for a hidden variable, though – grit: whoever is driven in the good sense will both write the dissertation and grind LeetCode for the interviews.</p>

<h1 id="минусы">Cons</h1>
<p>And of course there is a sea of cons too. Otherwise there wouldn’t be so many flame wars and memes about the Ph.D. I’ll skip the obvious ones (“you’d have grown more in 4 years at Yandex!”) and describe those I experienced myself.</p>

<h2 id="минус-1-ощущение-лузерства">Con 1. Feeling like a loser</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/existentialcrisis.png" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>There is a constant feeling that you’re stupid, that your research is worth nothing, and that the dissertation is written for the drawer (with the classic defense scheme it really is – only strong papers matter, not the dissertation itself). Even in the graduate school of the fashionable (as of 2013) HSE, you can’t escape the “research institute atmosphere” once it comes to the defense: “Crown of the Russian Empire” tea, old-timers in sweaters, RFBR grants. Projects nobody needs are dragged along, and everyone pretends they really matter. Even at grad seminars, where we shared what we were working on, I couldn’t shake the feeling that a bunch of smart losers had gathered. Moreover, it seemed to me that others felt exactly the same. I’d guess that at postdoc level and above, the feeling of loserdom is further aggravated by the need to “stand with an outstretched hand,” that is, to beg for research funding. In short, I remember the words of an upbeat French colleague at HSE: “You are never happy doing your Ph.D”.</p>

<h2 id="минус-2-плохая-культура-работы">Con 2. Bad working culture</h2>

<p>I’ve already said that overall I had more free time in grad school, but that flexible schedule often implied deadlines on weekends and even holidays, and a working Saturday in academia is pretty much an accepted norm.</p>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/final_doc.png" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>But time management isn’t even the main problem. In academia I saw many smart people, but not a single organized one I could learn from. And it’s not even about the notorious commits straight to master and academics’ code quality. You just look at your colleagues around you and see that everyone is scrambling, projects overlap, and the organization of work leaves much to be desired. Naturally, a fair share of the work ends up sloppy because of it. The “publish or perish” paradigm and the race for the h-index don’t help at all.</p>

<h2 id="минус-3-бюрократия">Con 3. Bureaucracy</h2>

<p>I defended in the old format, with dissertation councils and a sea of redundant paperwork – a pile of time simply wasted on bureaucracy. And that word covers not only getting papers signed by the secretary who is “having tea with a colleague right now, oh, excuse me,” but also satisfying every whim of reviewers and opponents, which can take months or even years. If you have ever felt the frustration of a rejected paper with toxic reviewer comments, multiply it by 10 for the dissertation.
I will never forget the story of one lecturer. I attended his class on the complexity of algorithms and was simply delighted both by the delivery (it was about matrix multiplication, Strassen’s wonderful revelation of how to speed up this basic operation, and modern algorithms) and by the lecturer himself; it even felt like we had chemistry – I wanted to share my algorithms, discuss how to speed them up, and generally have such a lecturer as a senior comrade or even a mentor. Then a fellow grad student of mine goes through her pre-defense, the VAK officials arrive and start nitpicking literally every word and every phrasing. The most active among them is that very lecturer. And he behaved in a way I cannot describe politely. You sort of understand that the lecturer switched roles with the best intentions – to help the student with her dissertation – but you still think “who wants to live like that.” And at the moment when you have to pass all these VAK quests yourself, the bureaucracy nearly breaks you; you have to summon all your willpower not to abandon the dissertation after the pre-defense.</p>

<p>To be fair, the HSE defense procedure has since become much more similar to the Western Ph.D. one, with far less needless bureaucracy. But I still got my share of the Soviet legacy.</p>

<h2 id="минус-4-гранты">Con 4. Grants</h2>

<p>This has already come up above, but it deserves to be a separate con. I am sure I will never return to academia, and not even because I am too dumb for research. Grant applications and the reports on them… mmm… that alone is enough. Kilometers of loosely connected paragraphs (a pity there was no ChatGPT back then), bragging, presenting things in the right light – all to squeeze out 700k rubles a year for the whole team, taxed at 35% on top. In Europe, of course, there are genuinely big grants that can fund an entire lab for many years. But even so, hunting for grants and writing applications and reports will make anyone miserable, big grants or not. I have heard that this is exactly how respectable professors get lured into industry at conferences: “Think about it! You will never have to write grant applications. Ne-ver.”</p>

<h2 id="минус-5-гештальтище">Con 5. The Gestalt monster</h2>

<div style="text-align:center"><img src="/images/20230126-to-phd-or-not-to-phd/geschtalt.jpg" width="500px" /></div>
<p><em>Source: <a href="https://phdcomics.com/">https://phdcomics.com</a>.</em></p>

<p>The sheer obligation to finally pull off the dissertation is a colossal gestalt. Anyone who has walked this path knows it: the period from the pre-defenses to the long-awaited defense is very oppressive. Wherever you are, whatever you do, part of your brain is occupied by the thought that you must be writing the dissertation. The feeling that without a successful defense and the degree you have simply wasted 3-4 years is deeply depressing. And despite the amazing weight-off-your-shoulders sensation after a successful defense, I would not want to repeat the experience.</p>

<h2 id="заключение">Conclusion</h2>

<p>Despite all the cons, I want to end on a positive note (my wife prompts from the side: be sure to mention that if not for grad school, we would not have met). A Ph.D. is a great experience. Though I wouldn’t call it the happiest period of my life, I don’t regret it. A kind of adventure and an “extension of childhood,” but when, if not at 23? If I were 23 again, I would decide to go to grad school again, only rather for a Ph.D. somewhere cool like Singapore, working on a more relevant ML topic. And one should definitely go to grad school while young; later, once you are encumbered with jobs, families, and other obligations, that’s it – you won’t jump out of that local minimum.</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[In this post I collected my own subjective pros and cons of pursuing a Ph.D. degree. If you scroll down, you’ll find a richer version of this post translated from Russian, with all jokes and puns.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230126-to-phd-or-not-to-phd/teaser.jpg%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230126-to-phd-or-not-to-phd/teaser.jpg%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Math in a real project: scaling laws for near-duplicate papers</title><link href="https://yorko.github.io/2023/scaling-laws-near-dups/" rel="alternate" type="text/html" title="Math in a real project: scaling laws for near-duplicate papers" /><published>2023-10-20T00:00:00+00:00</published><updated>2023-10-20T00:00:00+00:00</updated><id>https://yorko.github.io/2023/scaling-laws-near-dups</id><content type="html" xml:base="https://yorko.github.io/2023/scaling-laws-near-dups/"><![CDATA[<p><em>In this post, I describe how graph theory popped up out of the blue in a real project.</em></p>

<hr />

<p>In <a href="https://yorko.github.io/2023/practical-near-dup-detection/">one of my recent posts</a>, I described near-duplicate detection with LSH and how to run it in Python with Datasketch.</p>

<p>When you apply it to scientific papers or submitted manuscripts, you can spot various types of fraudulent behavior:</p>

<ul>
  <li>simultaneous submissions – when the authors submit two or more manuscripts to different journals at the same time;</li>
  <li>duplicate publications – cases when there is a pair of almost identical published papers;</li>
  <li>“salami slicing” – when a long paper is split into several smaller ones, each published independently;</li>
  <li>“paper mills” – the research misconduct of selling fraudulent papers, some of which can be spotted with near-dup detection algorithms.</li>
</ul>
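<p>For illustration, here is a minimal, self-contained sketch of the core idea behind near-dup detection – shingling plus Jaccard similarity – without the MinHash/LSH machinery covered in the Datasketch post. The example titles, the shingle size <code>k=5</code>, and the 0.8 threshold are made up for this sketch, not production settings.</p>

```python
# Minimal near-duplicate check via character shingles + Jaccard similarity.
# This is a simplified stand-in for MinHash/LSH: it compares one pair of
# texts exactly instead of hashing millions of documents.

def shingles(text: str, k: int = 5) -> set:
    """Return the set of overlapping character k-grams of a text."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def is_near_duplicate(t1: str, t2: str, threshold: float = 0.8) -> bool:
    return jaccard(shingles(t1), shingles(t2)) >= threshold

# Toy titles (invented for illustration):
paper = "Deep learning for protein structure prediction"
resub = "Deep learning for protein structure prediction."
other = "Graph neural networks for traffic forecasting"
```

<p>Real pipelines avoid the quadratic pairwise comparison by sketching shingle sets with MinHash and banding the sketches with LSH, which is what Datasketch provides.</p>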

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/diff_potential_retraction.png" width="700px" />
</div>

<p><em><a href="https://draftable.com/compare/MXhdtSjXWEqg">Example</a> of a spotted potential retraction.</em></p>

<p>With my prototype, we first developed a near-duplicate detection solution at Elsevier and then collaborated with STM to <a href="https://www.stm-assoc.org/stm-integrity-hub-launches-new-research-integrity-tool/">roll it out</a> for all publishers.</p>

<p>Internally, at Elsevier, we measured that around 4% of manuscripts have a near-duplicate submitted earlier. Prior to scaling the algorithm to all journals of all major publishers, a reasonable question to ask was: <strong>As we increase the set of papers under consideration, how does the percentage of those having at least one near-duplicate change?</strong> If we expect it to stay at 4%, that’s one story; if we expect 20% of near-dups, that’s a completely different story.</p>

<h2 id="mathematical-problem-formulation">Mathematical problem formulation</h2>

<p>Let a graph represent the relation “to be a near-duplicate”:</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/graph.png" width="400px" />
</div>


<p>\(N\) is the number of nodes, \(E\) is the number of edges.</p>

<p>Interpretation:</p>

<ul>
  <li>node \(\leftrightarrow\) a title/abstract/full text of a manuscript/paper</li>
  <li>edge \(\leftrightarrow\) 2 titles/abstracts/full texts are near-duplicates</li>
  <li>connected node (with at least one edge) \(\leftrightarrow\) a title/abstract/full text has at least one near-duplicate</li>
  <li>isolated node (without edges) \(\leftrightarrow\) a title/abstract/full text is “unique”</li>
</ul>

<p>Let’s say Publisher 1 ran LSH and found \(\Large \alpha_1\) % of near-dups (connected nodes):</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/graph_publisher1.png" width="400px" />
</div>

<p>Now, Publisher 2 also ran LSH and found \(\Large \alpha_2\) % of near-dups (connected nodes):</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/graph_publisher2.png" width="400px" />
</div>

<p><strong>Question:</strong> What’s the new \(\Large \alpha\) % of connected nodes in the combined graph?</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/graph_publisher1_2.png" width="400px" />
</div>

<p>Note:</p>
<ul>
  <li>we are unlikely to ever check this experimentally;</li>
  <li>we are unlikely to estimate the share of edges between the 2 sets (near-dups across Publisher 1 and Publisher 2) as the parties do not share data.</li>
</ul>

<p><strong>Looking from another perspective:</strong></p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/graph_isolated_nodes.png" width="400px" />
</div>

<ul>
  <li>What’s the % of <font color="red">isolated</font> nodes (“unique” papers) in a graph?</li>
  <li>How does this % grow in a growing graph?</li>
</ul>

<p><strong>Assumptions</strong></p>

<p>We assume a random graph model:</p>

<ul>
  <li>each pair of nodes is connected by an edge with a fixed probability \(\large \mathbb{p} \ll 1\)</li>
  <li>the fact that two nodes are connected is independent of other nodes/edges</li>
  <li>\(\large \mathbb{p}\) is independent of the number of nodes</li>
</ul>

<blockquote>
  <p><em>“All models are wrong but some are useful”</em> <br />
George Box, British statistician</p>
</blockquote>

<p><strong>Deriving the formula for # isolated nodes</strong></p>

<p>Let’s consider a graph with \(n-1\) nodes (black), and separately, the \(n\)-th node (<font color="green">green</font>) which is “added” to the graph.</p>

<p><em>Interpretation:</em> a growing set of papers.</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/growing_graph.png" width="400px" />
</div>

<p>The \(n\)-th node is connected to each one of the other nodes with a fixed probability \(\large \mathbb{p} \ll 1\).</p>

<p>The probability that it’s not connected to any of them (i.e. isolated):</p>

\[\large P_{iso} = {(1-\mathbb{p})}^{(n-1)}\]

<p>With  \(\large \mathbb{p} \ll 1\) and  \(\large n \gg  1\) we have:</p>

\[P_{iso} = {(1-\mathbb{p})}^{(n-1)} \approx {(1-\mathbb{p})}^n = \sum_{k=0}^n {n \choose k} (-\mathbb{p})^k\]

\[= 1 - n\mathbb{p} + \frac{n(n-1)}{2}\mathbb{p}^2 - \frac{n(n-1)(n-2)}{6}\mathbb{p}^3 + \dots\]

\[\approx 1 - n\mathbb{p} + \frac{n^2}{2}\mathbb{p}^2 - \frac{n^3}{6}\mathbb{p}^3 + \dots\]

<font color="green">$$\Large \approx e^{-n\mathbb{p}}$$</font>

<p>We arrived at an <strong>exponential</strong> dependency of \(P_{iso}\) on \(n\).</p>

<p>\(P_{iso}\) is also the expected percentage of isolated nodes in the graph.</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/exp_func_sketch.png" width="400px" />
</div>

<p><em>Interpretation:</em> as the set of papers grows, any given paper is more likely to have a near-duplicate somewhere in the set, and thus the percentage of “unique” papers drops.</p>
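<p>The derivation above is easy to sanity-check numerically. Below is a toy Monte Carlo simulation (my illustration, with made-up values of \(n\) and \(p\)) comparing the observed fraction of isolated nodes in a random graph with the predicted \(e^{-n\mathbb{p}}\):</p>

```python
import math
import random

# Monte Carlo check of the derivation: in an Erdős–Rényi random graph
# G(n, p), the fraction of isolated (degree-0) nodes should be close to
# exp(-n * p). The values of n and p here are toy values for a quick run,
# not the paper-collection estimates from the post.

def isolated_fraction(n: int, p: float, rng: random.Random) -> float:
    """Sample G(n, p) once and return the fraction of isolated nodes."""
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each pair gets an edge with probability p
                degree[i] += 1
                degree[j] += 1
    return sum(d == 0 for d in degree) / n

rng = random.Random(42)
n, p = 500, 0.002                       # so that n * p = 1
observed = sum(isolated_fraction(n, p, rng) for _ in range(5)) / 5
predicted = math.exp(-n * p)            # exp(-1), about 0.37
```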

<h2 id="experimental-results">Experimental results</h2>

<h3 id="700k-sciencedirect-abstracts">700k ScienceDirect abstracts</h3>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/sd_700k_abstracts_dup_perc_growth_20210505.png" width="400px" />
</div>
<p><em>Percent of unique abstracts (blue) and the corresponding prediction (orange).</em></p>

<p>For 700k ScienceDirect abstracts, we see ~95% of unique ones, 5% have \(\geq1\) near-duplicate.</p>

<p>Estimated \(\large \mathbb{p} = 7 \cdot 10^{-8}\)</p>

<p>The same data, projected to 10 mln. papers:</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/sd_700k_abstracts_dup_perc_growth_projection_20210505.png" width="500px" />
</div>
<p><em>Same as before but with the projection to a set of 10 mln. abstracts (orange)</em></p>

<p>Based on the estimated \(\large \mathbb{p} = 7 \cdot 10^{-8}\), the projection to 10 mln. papers gives \(\approx\) 50% near-dups (crazy!).</p>

<h3 id="46-mln-manuscript-titles-from-the-editorial-manager">4.6 mln. manuscript titles from the Editorial Manager</h3>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/em_dup_titles_growth_20210505.png" width="500px" />
</div>
<p><em>Percent of unique manuscript titles (blue) and the corresponding prediction (orange). The red line shows the share of manuscript titles with at least one near-duplicate.</em></p>

<p>For 4.6 mln. manuscripts from Editorial Manager, we see ~62.8% of unique ones, 37.2% have \(\geq1\) near-duplicate.</p>

<p>Estimated \(\large \mathbb{p} = 10^{-7}\). Very close! Now the previous projection doesn’t look crazy anymore.</p>
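<p>Both estimates of \(\mathbb{p}\) follow from a one-line inversion of \(P_{iso} = e^{-n\mathbb{p}}\). The back-of-the-envelope sketch below (the helper names are mine) plugs in the numbers quoted above:</p>

```python
import math

# Estimate the edge probability p from the observed share of "unique"
# (isolated) papers by inverting P_iso = exp(-n * p), then project the
# unique share to a larger collection. The inputs (700k abstracts with
# ~95% unique, 4.6 mln titles with ~62.8% unique) are the figures quoted
# in the post; the function names are illustrative.

def estimate_p(n: int, unique_share: float) -> float:
    """Invert P_iso = exp(-n * p) to get p."""
    return -math.log(unique_share) / n

def project_unique_share(n: int, p: float) -> float:
    """Expected share of isolated ("unique") nodes in a graph of n nodes."""
    return math.exp(-n * p)

p_sd = estimate_p(700_000, 0.95)       # ScienceDirect abstracts: ~7e-8
p_em = estimate_p(4_600_000, 0.628)    # Editorial Manager titles: ~1e-7

# Projection to 10 mln. papers with the ScienceDirect estimate:
unique_at_10m = project_unique_share(10_000_000, p_sd)  # roughly one half
```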

<h2 id="one-more-check-of-the-model-validity">One more check of the model validity</h2>

<p>According to the model, the number of edges is quadratic in the number of nodes:</p>

\[E = \mathbb{p} \cdot {n \choose 2} = \mathbb{p} \cdot \frac{n(n-1)}{2}\]

<p>Imagine a clique (a fully connected graph) in which each edge is kept with probability \(\mathbb{p}\); hence the formula.</p>
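<p>This expected edge count is also straightforward to verify on a simulated random graph (again a toy check with made-up \(n\) and \(p\), not the real collections):</p>

```python
import random

# Check E[#edges] = p * n * (n - 1) / 2 on simulated G(n, p) graphs.
# Toy parameters, chosen only so the simulation runs quickly.

def count_edges(n: int, p: float, rng: random.Random) -> int:
    """Sample G(n, p) once and return the number of edges."""
    return sum(
        1 for i in range(n) for j in range(i + 1, n) if rng.random() < p
    )

rng = random.Random(0)
n, p = 400, 0.01
expected_edges = p * n * (n - 1) / 2                   # 798.0
avg_edges = sum(count_edges(n, p, rng) for _ in range(5)) / 5
```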

<p>The scaling law of #edges (near-dups) vs. #nodes (titles/abstracts) is predicted well, though the coefficients are a bit off. The model can be adjusted for that.</p>

<div style="text-align:center">
<img src="/images/20231020-scaling-laws-near-dups/sd_700k_abstracts_and_EM_titles_num_edges_vs_num_nodes_20210505.png" width="800px" />
</div>

<h2 id="limitations-of-the-modelanalysis">Limitations of the model/analysis</h2>

<ul>
  <li>we ran LSH with titles &amp; abstracts, not with full texts</li>
  <li>LSH is probabilistic; its <em>recall</em> is &lt;100%, i.e. it does not find all of the near-dups (the actual recall is ~90% for titles and intractable to assess for full texts)</li>
  <li>model predictions are good qualitatively (the model explains the effects well) but a bit off quantitatively (a discrepancy for #edges vs. #nodes)</li>
</ul>

<h2 id="conclusion">Conclusion</h2>

<p>The mathematical model shows that the number of papers with at least one near-duplicate increases with the size of the collection. Hence, in a combined dataset of papers from multiple publishers, we’d expect to see a higher percentage of duplicated papers and, therefore, more cases of research misconduct.</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[In this post, I describe how graph theory popped up out of the blue in a real project. In one of the latest posts I described near-duplicate detection with LSH and how to run it in Python with Datasketch. When you apply it to scientific papers or submitted manuscripts, you can spot various types of fraudulent behavior: simultaneous submissions – when the authors send 2 or more manuscripts to the different journals at the same time; duplicate publications – cases when there is a pair of almost identical published papers “salami slicing” – when a long paper is split into several smaller ones, each published independently; “paper mills” – research misconduct of selling fraudulent papers, some of those can be spotted with near-dup detection algorithms. Example of a spotted potential retraction. With my prototype, we first developed a near-duplicate detection solution at Elsevier and then collaborated with STM to roll it out for all publishers. Internally, at Elsevier, we measured that around 4% of manuscripts have a near-duplicate submitted earlier. Prior to scaling the algorithm to all journals by all major publishers, a reasonable question we asked was: As we increase the set of papers under consideration, how does the percentage of those having at least one near-duplicate change? Apparently, if we’d expect it to stay at 4% that’s one story, but if we expect to have 20% of near-dups, that’s a completely different story. 
Mathematical problem formulation Let a graph represent the relation “to be a near-duplicate” \(N\) – is the number of nodes, \(E\) – is the number of edges. Interpretation: node \(\leftrightarrow\) a title/abstract/full text of a manuscript/paper edge \(\leftrightarrow\) 2 titles/abstracts/full texts are near-duplicates connected node (with at least one edge) \(\leftrightarrow\) a title/abstract/full text has at least one near-duplicate isolated node (without edges) \(\leftrightarrow\) a title/abstract/full text is “unique” Let’s say Publisher 1 ran LSH and found \(\Large \alpha_1\) % of near-dups (connected nodes): Now, Publisher 2 also ran LSH and found \(\Large \alpha_2\) % of near-dups (connected nodes): Question: What’s the new \(\Large \alpha\) % of connected nodes in the combined graph? Note: we are unlikely to ever check this experimentally; we are unlikely to estimate the share of edges between the 2 sets (near-dups across Publisher 1 and Publisher 2) as the parties do not share data. Looking from another perspective: What’s the % of isolated nodes (“unique” papers) in a graph? How does this % grow in a growing graph? Assumptions We assume a random graph model: each pair of nodes is connected by an edge with a fixed probability \(\large \mathbb{p} \ll 1\) the fact that two nodes are connected is independent of other nodes/edges \(\large \mathbb{p}\) is independent of the number of nodes “All models are wrong but some are useful” George Box, British statistician Deriving the formula for # isolated nodes Let’s consider a graph with \(n-1\) nodes (black), and separately, the \(n\)-th node (green) which is “added” to the graph. Interpretation: a growing set of papers. The \(n\)-th node is connected to each one of the other nodes with a fixed probability \(\large \mathbb{p} \ll 1\). The probability that it’s not connected to any of them (i.e. 
isolated): \[\large P_{iso} = {(1-\mathbb{p})}^{(n-1)}\] With \(\large \mathbb{p} \ll 1\) and \(\large n \gg 1\) we have: \[P_{iso} = {(1-\mathbb{p})}^{(n-1)} \approx {(1-\mathbb{p})}^n = \sum_{k=0}^n {n \choose k} (-\mathbb{p})^k\] \[= 1 - n\mathbb{p} + \frac{n(n-1)}{2}\mathbb{p}^2 - \frac{n(n-1)(n-2)}{6}\mathbb{p}^3 + \dots\] \[\approx 1 - n\mathbb{p} + \frac{n^2}{2}\mathbb{p}^2 - \frac{n^3}{6}\mathbb{p}^3 + \dots\] $$\Large = e^{-n\mathbb{p}}$$ We arrived at an exponential dependency of \(P_{iso}\) on \(n\). \(P_{iso}\) is also the expected percentage of isolated nodes in the graph. Interpretation: as the set of papers grows, there’s a higher chance that any given paper will see a near-duplicate, and thus the percentage of “unique” papers drops. Experimental results 700k ScienceDirect abstracts Percent of unique abstracts (blue) and the corresponding prediction (orange). For 700k ScienceDirect abstracts, we see ~95% of unique ones, 5% have \(\geq1\) near-duplicate. Estimated \(\large \mathbb{p} = 7 \cdot 10^{-8}\) Same + projected to 10 mln. papers Same as before but with the projection to a set of 10 mln. abstracts (orange) Based on the estimated \(\large \mathbb{p} = 7 \cdot 10^{-8}\), the projection to 10 mln. papers gives \(\approx\) 50% near-dups (crazy!). 4.6 mln. manuscript titles from the Editorial Manager Percent of unique manuscript titles (blue) and the corresponding prediction (orange). The red line shows the share of manuscript titles with at least one near-duplicate. For 4.6 mln. manuscripts from Editorial Manager, we see ~62.8% of unique ones, 37.2% have \(\geq1\) near-duplicate. Estimated \(\large \mathbb{p} = 10^{-7}\). Very close! Now the previous projection doesn’t look crazy anymore. 
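Under the same random-graph assumptions, the exponential prediction is easy to sanity-check with a small simulation (a sketch only; the n and p values here are illustrative, not the real estimates):

```python
import math
import random

def isolated_fraction(n: int, p: float, seed: int = 0) -> float:
    """Sample an Erdos-Renyi G(n, p) graph and return the share of isolated nodes."""
    rng = random.Random(seed)
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:  # each pair is connected independently with probability p
                degree[i] += 1
                degree[j] += 1
    return sum(d == 0 for d in degree) / n

n, p = 2000, 1e-3
print(isolated_fraction(n, p))  # simulated share of "unique" papers
print(math.exp(-n * p))         # model prediction, e^{-np}
```

The two printed numbers should agree to within sampling noise.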
One more check of the model validity According to the model, the number of edges is quadratic in the number of nodes: \[E = \mathbb{p} \cdot {n \choose 2} = \mathbb{p} \cdot \frac{n(n-1)}{2}\] Imagine a clique (fully-connected graph), each edge is then kept with probability \(\mathbb{p}\), hence this formula. The scaling law of #edges (near-dups) vs. #nodes (titles/abstracts) is predicted well, though the coefficients are a bit off. The model can be adjusted for that. Limitations of the model/analysis we ran LSH with titles &amp; abstracts, not with full texts LSH is probabilistic, its recall is &lt;100%, i.e. it does not find all of the near-dups (the actual recall is ~90% for titles, intractable to assess for full texts) model predictions are good qualitatively (the model explains the effects well) but a bit off quantitatively (a discrepancy for #edges vs. #nodes) Conclusion The mathematical model shows that the number of papers with at least one near-duplicate increases with the size of the collection. 
Hence, in a combined dataset of papers from multiple publishers, we’d expect to see a higher percentage of duplicated papers and, therefore, more cases of research misconduct.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220231020-scaling-laws-near-dups/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220231020-scaling-laws-near-dups/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">[RU] Inburgering in the Netherlands</title><link href="https://yorko.github.io/2023/inburgering/" rel="alternate" type="text/html" title="[RU] Inburgering in the Netherlands" /><published>2023-07-26T00:00:00+00:00</published><updated>2023-07-26T00:00:00+00:00</updated><id>https://yorko.github.io/2023/inburgering</id><content type="html" xml:base="https://yorko.github.io/2023/inburgering/"><![CDATA[<p><em>In this post, I describe my experience in passing Dutch naturalization exams.</em></p>

<hr />

<p>Since both this blog and the <a href="https://t.me/new_yorko_times">Telegram channel</a> are called New Yorko Times, I’ll be writing more about my “new times”. The other day I passed all the exams, so I’m now five minutes (and one year) short of being Dutch, and I’ll describe the process. In Dutch it’s called inburgering.</p>

<p>A bit of context: the Netherlands has a tax-break program for highly skilled migrants (kennismigrant), under which 30% of income is tax-free for the first 5 years. This is called the 30% ruling, and income calculators have a corresponding checkbox, e.g. at <a href="https://thetax.nl">https://thetax.nl</a>. For example, with a gross salary of 100k and the ruling, you net 6k euros a month; without it, 4,800 (a difference of 20% rather than 30%, because things are complicated and the tax scale is progressive). When the ruling ends, you have to simultaneously <del>go begging</del> sort out your finances and renew your residence permit. There are <a href="https://rabotaem.nl/docs/kennismigrant-cherez-5-let-varianty/">at least 5 options</a> here, from “leave everything as is” to changing citizenship. For permanent residence or citizenship, you have to pass the language exams.</p>

<p>There are 5 or 6 exams: 4 standard ones, as in IELTS/TOEFL (writing, listening, speaking, reading), plus an exam on knowledge of Dutch society (history, politics, taxes, insurance, and all sorts of everyday matters). Those who don’t work also have to take an exam on orientation in the Dutch labor market: put together a portfolio and pass an interview. The catch is that the interview is in Dutch (what an ambush!).</p>

<p>I also managed to slip through while the required level is still A2, which is fairly light and, accordingly, not enough for a proper conversation with the Dutch. There have long been promises to raise it to B1, which is twice as hard. While it’s all still fresh in my memory, I’ll tell the story. What follows is probably of interest only to those who still have all of this ahead of them.</p>

<p>In principle, the exams are not that unrealistic to pass in one go, Phystech-style (Japanese? Let’s finish this cigarette and go take the exam). The passing grade is 6 out of 10 on each exam. But of course, it’s better to learn at least a bit of the language anyway, so you can explain yourself in simple situations. My wife and I signed up for courses and for 4 months literally went to school, sat at desks, and raised our hands (if anyone needs a recommendation for a school in The Hague, ask in the comments). That was the pre-training. After that, we fine-tuned specifically for the exams; 5 sessions with a tutor should be enough.</p>

<p>Now let’s briefly go through the exams.</p>

<h2 id="lezen-чтение">Lezen (reading)</h2>

<p>The easiest exam: to answer some of the questions, you don’t even need to know the language; being able to do a visual Ctrl+F is enough. There are 25 questions in total, and you get over an hour, which is more than plenty.</p>

<p>Here is a typical example of a question:</p>

<div style="text-align:center">
<img src="/images/20230726-inburgering/lezen_example.png" width="700px" />
</div>

<p>The recommended strategy: first read the question carefully, then the answer options, and only then look for the answer in the text. Reading the text from start to finish makes no sense.</p>

<p>To prepare, you can simply go through the sample exams at <a href="https://inburgeren.nl/">inburgeren.nl</a>; more examples can be found in the <a href="https://www.adappel.nl/product-page/studieboek-ingurgering-a2">Ad Appel book</a>.</p>

<p>I got 10/10 here without any trouble.</p>

<h2 id="luisteren-аудирование">Luisteren (listening)</h2>

<p>Much the same, except you have to listen to a recording or watch a video and pick answer options. Again 25 questions, with 45 minutes given, which is also plenty. Here I got 10/10 as well.</p>

<p>On listening, the <a href="https://www.youtube.com/@AdAppeltaaltrainingen">Ad Appel YouTube channel</a> threw me off slightly: it is genuinely hard there, almost like real-life speech, while the actual exam is easier.</p>

<h2 id="schrijven-письмо">Schrijven (writing)</h2>

<p>Schrijfvaardigheid is a true classic: writing with a pen on paper, which may feel unusual after many years of poking at keys. You get 40 minutes for 4 exercises: write 2 letters, fill in one form, and write something like a note to a newspaper. This is already harder than reading or listening; at the very least, you need to know the grammar.</p>

<div style="text-align:center">
<img src="/images/20230726-inburgering/schrijven_example.png" width="700px" />
</div>
<p><em>A typical example of a task; source: <a href="https://inburgeren.nl/">inburgeren.nl</a></em></p>

<p>Writing is what my tutor and I practiced most diligently, and almost every time there were quite a few corrections. There are also many examples in the <a href="https://www.adappel.nl/product-page/studieboek-ingurgering-a2">Ad Appel book</a>.</p>

<p>I scribbled my way to a 9 out of 10 (the result arrived 4 weeks later).</p>

<h2 id="spreken-говорение">Spreken (speaking)</h2>

<p>Here you get 35 minutes for 24 tasks: 12 of them are open questions, and the other 12 are essentially listening comprehension (I don’t understand why that is part of the speaking exam at all).</p>

<p>A typical open question: “On weekends, I like to go to the seaside. How do you spend your weekends? And with whom?”. You get only 15 seconds to answer, and it is important to address both questions.</p>

<p>Speaking has to be practiced; winging it won’t end well. The <a href="https://www.youtube.com/@AdAppeltaaltrainingen">Ad Appel YouTube channel</a> has quite a few examples, and with the tutor we also focused on speaking.</p>

<p>For spreken I got 7 out of 10 (the result arrived 6 weeks later). There is no feedback, so I don’t know why it wasn’t the maximum. Then again, the passing grade is 6 out of 10, so all is well. Keep in mind, though, that speaking is the exam people fail most often.</p>

<h2 id="knm-экзамен-на-знание-нидерландского-общества">KNM (knowledge of Dutch society)</h2>

<p>KNM is a test on Dutch customs, history, everyday life (healthcare, taxes, insurance), and so on. It is enough to read the book <a href="https://www.coutinho.nl/nl/welkom-in-nederland-9789046904886">“Welkom in Nederland”</a> and do the practice exams at <a href="https://inburgeren.nl/">https://inburgeren.nl</a>. I also dug up some harder practice tests, since rumor had it the actual exam questions differ a lot from the practice ones. In my case that turned out to be unnecessary: the exam was almost exactly like the practice tests.</p>

<p>The book was a pleasure to read; it is written at the A2 level, so you can just pick it up and read. Of course, you still have to peek into a dictionary: on topics such as utilities, health, or insurance there are quite a few unfamiliar words.</p>

<p>There are 40 questions in total and 45 minutes, so the pace is brisk. Surprisingly, I aced KNM, although in general people do have trouble with it.</p>

<h2 id="ona-ориентация-на-рынке-труда">ONA (orientation in the labor market)</h2>

<p>I didn’t have to take this one; the 6th exam is for those who don’t work. My wife will be taking it, so I may update the post afterwards. In the meantime, you can search for the keyword ONA in one of the many Russian-speaking chats in NL, for example, <a href="https://t.me/learning_nederlands">this one</a>.</p>

<p>The best place to discuss this post is the New Yorko Times Telegram channel, namely, <a href="https://t.me/new_yorko_times/157">here</a>.</p>

<hr />

<p>I found <a href="https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/">“ChatGPT Prompt Engineering for Developers”</a> great and would like to give a short overview.</p>

<p>It’s our favorite Andrew Ng in collaboration with Isa Fulford from OpenAI.</p>

<div style="text-align:center">
<img src="/images/20230719-prompt-engineering-course-ng-openai/chatgpt_for_dev.png" width="700px" />
</div>

<p><em>Hi, Andrew! Long time no see! <a href="https://www.deeplearning.ai/short-courses/">image credit</a></em></p>

<h2 id="pros">Pros</h2>

<ul>
  <li>the course is (yet) free</li>
  <li>the course is very short, just ~10 lectures, 5-10 min. each</li>
  <li>very practical, it’s all about examples of using OpenAI APIs</li>
  <li>the platform is great: video on the right, and interactive Jupyter running on the left; thus you can play around with code while watching the video</li>
</ul>

<div style="text-align:center">
<img src="/images/20230719-prompt-engineering-course-ng-openai/dl_ai_course_interface.png" width="700px" />
</div>

<h2 id="some-tips-covered">Some tips covered</h2>

<ul>
  <li>tiny ones like putting the part of the text that you need to process between triple backticks</li>
  <li>making chatGPT respond in a structured way, e.g. JSON so that you don’t have to parse the output with regexp (if you are solving a problem with a regexp, you have two problems)</li>
  <li>all the way through typical downstream tasks (sentiment classification, translation, etc.) up to writing a small pizza order bot with chatGPT backend where basically the whole operation of the bot is described with one long prompt</li>
</ul>
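<p>For instance, the first two tips can be combined in one prompt. A hypothetical sketch (the review text and the JSON key are made up, and the model call itself is omitted):</p>

```python
import json

# Hypothetical illustration of two tips from the course: delimit the text to
# be processed with triple backticks, and ask for JSON output so the reply
# can be parsed with json.loads instead of regexes.
review = "The pizza arrived cold, but the staff were friendly."
prompt = (
    "Classify the sentiment of the review delimited by triple backticks as "
    "positive, negative, or mixed. Respond as JSON with a single key 'sentiment'.\n"
    f"```{review}```"
)

# A well-behaved model reply would then parse directly:
reply = '{"sentiment": "mixed"}'
print(json.loads(reply)["sentiment"])  # prints: mixed
```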

<h2 id="what-i-missed">What I missed</h2>

<ul>
  <li>examples of few-shot learning, how to best provide examples right there in the prompt to improve downstream performance as compared to the zero-shot setup</li>
  <li>how to debug such solutions. Debugging a pizza order bot that follows your long-written prompt with instructions sounds close to impossible</li>
</ul>

<p>Despite these gaps, the course is definitely worth 2-3 hours of your time and 0 euros/dollars. I recommend taking a couple of your own tasks (either from pet projects or real business tasks) and playing with them as you progress through the course.</p>
I recommend taking a couple of your own tasks (either from pet-projects or real business tasks) and playing with them as you progress through the course.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230719-prompt-engineering-course-ng-openai/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230719-prompt-engineering-course-ng-openai/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Near-duplicate Detection with Locality-Sensitive Hashing and Datasketch</title><link href="https://yorko.github.io/2023/practical-near-dup-detection/" rel="alternate" type="text/html" title="Near-duplicate Detection with Locality-Sensitive Hashing and Datasketch" /><published>2023-06-27T00:00:00+00:00</published><updated>2023-06-27T00:00:00+00:00</updated><id>https://yorko.github.io/2023/practical-near-dup-detection</id><content type="html" xml:base="https://yorko.github.io/2023/practical-near-dup-detection/"><![CDATA[<p><em>In this post, I review Locality-Sensitive Hashing for near-duplicate detection. I demonstrate the principle and provide a quick intro to <code class="language-plaintext highlighter-rouge">Datasketch</code> which is a convenient library to run near-duplicate detection at scale.</em></p>

<hr />

<h2 id="the-problem-being-solved">The problem being solved</h2>

<p>Once I needed to find near-duplicates in a (relatively) large collection of texts, ~5 mln. docs. I wanted the solution to be:</p>

<ul>
  <li>easy-to-use</li>
  <li>scalable</li>
  <li>exact, i.e. when a pair of near-duplicate texts is flagged, we can be confident that those are indeed near-duplicates.</li>
</ul>

<p>I struggled for quite a while to find a solution that would satisfy all three conditions, until I found Locality-Sensitive Hashing (LSH) and its implementation – <a href="https://github.com/ekzhu/datasketch">Datasketch</a>.</p>

<h2 id="minhash-lsh--the-principle">MinHash LSH – the principle</h2>

<h3 id="1-when-we-need-to-deduplicate-a-single-dataset">1. When we need to deduplicate a single dataset</h3>

<div style="text-align:center">
<img src="/images/20230627-practical-near-dup-detection/shingling_minhashing.png" width="700px" />
</div>

<p><em><a href="https://towardsdatascience.com/understanding-locality-sensitive-hashing-49f6d1f6134">image credit</a></em></p>

<p>In a nutshell, this works as follows. For a most typical scenario where we need to identify near-duplicates in a single collection of texts, we perform the following steps:</p>

<ul>
  <li>the text is processed and cut into <em>shingles</em> (overlapping substrings of a fixed size);</li>
  <li>then a set of shingles is <em>minhashed</em>, this involves creating multiple hashes for a set of shingles, so that we end up with a single vector of integers for each piece of text, a.k.a. a <em>signature</em>;</li>
  <li>the dimension of the hash vector is further reduced via Locality-Sensitive Hashing, which creates a single hash from each band of nearby elements in the hash vector. The resulting vector is also called a <em>bandhash signature</em> or <em>bandhash vector</em>;</li>
  <li>all pairs of signatures whose elements match in at least one position generate <em>candidate pairs</em>;</li>
  <li>(optionally) we can measure the true similarity between corresponding pieces of text to account for errors (False Positives) of the LSH algorithm.</li>
</ul>

<p>I know there are quite a few terms here. Instead of explaining all of them (and thus re-writing something similar to <a href="https://mattilyra.github.io/2017/05/23/document-deduplication-with-lsh.html">this nice blog post</a>) I’d refer to a classical book <a href="http://infolab.stanford.edu/~ullman/mmds/ch3n.pdf">“Mining massive datasets”, ch. 3</a> for an intro into Locality-Sensitive Hashing and finding similar items. In this blog post, we’ll focus on a practical use case of finding near-dups in a large collection of texts.</p>
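<p>To make the banding step above concrete, here is a minimal sketch (not Datasketch’s internals; the names and parameters are illustrative): the signature is split into bands, each band is hashed, and two texts become a candidate pair if at least one band hash collides.</p>

```python
def band_hashes(signature, band_size):
    """Hash consecutive bands of `band_size` elements of a minhash signature."""
    return [hash(tuple(signature[i:i + band_size]))
            for i in range(0, len(signature), band_size)]

# Two toy 6-dimensional signatures that differ in a few positions
sig_a = [3, 7, 1, 9, 4, 2]
sig_b = [3, 7, 1, 8, 5, 2]

# Candidate pair: at least one band hash matches (here, the band (3, 7))
is_candidate = any(x == y for x, y in zip(band_hashes(sig_a, 2), band_hashes(sig_b, 2)))
print(is_candidate)  # True
```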

<h3 id="2-when-we-have-incoming-query-data-that-we-want-to-compare-to-a-large-index-dataset">2. When we have incoming “query” data that we want to compare to a large “index” dataset</h3>

<p>Here, the “index” (historical) data can be a large dataset, e.g. 5 mln. documents.</p>

<p>The “query” dataset is much smaller, e.g. 10K documents that we receive daily, say via some API, and would like to deduplicate.</p>

<div style="text-align:center">
<img src="/images/20230627-practical-near-dup-detection/illustrating_minhash_lsh_with_query_and_index.png" width="800px" />
</div>

<p>We could do without LSH at all by just comparing the 10K fresh documents to the 5 mln. historical ones. But that’d require 50 bln. comparisons each day, which might be computationally prohibitive (a dumb idea leading, above all, to a considerable carbon footprint). LSH is a technique that approximates the exact similarity function at a fraction of the cost.</p>

<p>The essence of the algorithm is to create <strong>signatures</strong> for each piece of text that is identified here by a <code class="language-plaintext highlighter-rouge">DocID</code>. Signatures are just numeric vectors of some fixed dimension, e.g. 128.</p>

<p>For two pieces of text to be considered candidates for near-duplicates, it suffices for their hash signatures to match in at least one component. In the picture above, the pair highlighted in green is one candidate, and the pair highlighted in orange is another; the matching hash values are shown in bold.</p>

<h2 id="limitations">Limitations</h2>

<ul>
  <li>The method only captures <strong>lexical similarity</strong>, not semantic similarity. Thus, with LSH, we won’t identify near-duplicates that differ due to paraphrasing, synonym replacement, etc.</li>
  <li>The method is probabilistic, i.e. some errors are allowed. Not all candidates would actually be near-duplicates. One can check this by calculating Jaccard similarity of the candidates. Thus, the algorithm is characterized by <strong>precision</strong> (out of all pairs of candidates found by the algorithm, what’s the proportion of real near-duplicates, i.e. with their Jaccard similarity exceeding the predefined threshold) and <strong>recall</strong> (out of all near-duplicate pairs, what’s the proportion of those found by the algorithm).</li>
  <li>In practice, for a large enough dataset and long pieces of text (e.g. full documents, not just titles), LSH tends to work worse in terms of precision, while recall cannot be assessed without a crazy carbon footprint: finding all true near-duplicate pairs in even a relatively small collection of 50K texts requires &gt;1.2B calls to a Jaccard similarity subroutine.</li>
</ul>
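<p>The precision/recall trade-off is governed by a standard result (see ch. 3 of “Mining massive datasets”, referenced below): with b bands of r rows each, a pair with Jaccard similarity s becomes a candidate with probability 1 - (1 - s^r)^b. A quick sketch of this S-curve (the b and r values here are illustrative):</p>

```python
def candidate_probability(s: float, b: int, r: int) -> float:
    """Probability that a pair with Jaccard similarity s becomes an LSH candidate."""
    return 1.0 - (1.0 - s ** r) ** b

# The S-curve: low-similarity pairs are rarely flagged, high-similarity almost always
for s in (0.3, 0.5, 0.7, 0.9):
    print(f"s={s}: P(candidate) = {candidate_probability(s, b=32, r=4):.3f}")
```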

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># imports
</span><span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">pickle</span>
<span class="kn">import</span> <span class="nn">re</span>
<span class="kn">from</span> <span class="nn">pathlib</span> <span class="kn">import</span> <span class="n">Path</span>

<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">import</span> <span class="nn">pandas</span> <span class="k">as</span> <span class="n">pd</span>
<span class="kn">from</span> <span class="nn">datasketch</span> <span class="kn">import</span> <span class="n">MinHash</span><span class="p">,</span> <span class="n">MinHashLSH</span>
<span class="kn">from</span> <span class="nn">matplotlib</span> <span class="kn">import</span> <span class="n">pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">from</span> <span class="nn">num2words</span> <span class="kn">import</span> <span class="n">num2words</span>
<span class="kn">from</span> <span class="nn">tqdm</span> <span class="kn">import</span> <span class="n">tqdm</span>
</code></pre></div></div>

<p><strong>Preprocessing and hashing</strong></p>

<p>Essentially, MinHashLSH operates on shingle sets, where shingles are overlapping substrings of a fixed size.
The following 4 code cells show how MinHashLSH builds hash vectors (a.k.a. signatures) for input texts.</p>

<p>Further, as described in the picture above, for two pieces of text to be considered candidates for near-duplicates, it suffices for their hash signatures to match in at least one component.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">s</span> <span class="o">=</span> <span class="s">"this is a piece of text"</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">shingle_size</span> <span class="o">=</span> <span class="mi">4</span>

<span class="n">shingle_set</span> <span class="o">=</span> <span class="p">{</span><span class="n">s</span><span class="p">[</span><span class="n">i</span> <span class="p">:</span> <span class="n">i</span> <span class="o">+</span> <span class="n">shingle_size</span><span class="p">]</span> 
               <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">s</span><span class="p">)</span> <span class="o">-</span> <span class="n">shingle_size</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)}</span>
<span class="n">shingle_set</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{' a p',
 ' is ',
 ' of ',
 ' pie',
 ' tex',
 'a pi',
 'ce o',
 'e of',
 'ece ',
 'f te',
 'his ',
 'iece',
 'is a',
 'is i',
 'of t',
 'piec',
 's a ',
 's is',
 'text',
 'this'}
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">hash_func</span><span class="p">(</span><span class="n">a_string</span><span class="p">,</span> <span class="n">salt</span><span class="p">:</span> <span class="nb">int</span> <span class="o">=</span> <span class="mi">1</span><span class="p">):</span>
    <span class="k">return</span> <span class="nb">hash</span><span class="p">(</span><span class="n">a_string</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">salt</span><span class="p">))</span>
</code></pre></div></div>

<p>These are the 5 components of a toy 5-dimensional hash signature. Each one of them is created by hashing all shingles and taking a min. value of the hashes.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">salt</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">5</span><span class="p">)):</span>
    <span class="k">print</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="nb">min</span><span class="p">([</span><span class="n">hash_func</span><span class="p">(</span><span class="n">el</span><span class="p">,</span> <span class="n">salt</span><span class="o">=</span><span class="n">salt</span><span class="p">)</span> <span class="k">for</span> <span class="n">el</span> <span class="ow">in</span> <span class="n">shingle_set</span><span class="p">]))</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0 -7220920153181112185
1 -9127360350460247126
2 -8803612098918371157
3 -8027849914885749588
4 -9069105076530742277
</code></pre></div></div>
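<p>As a sanity check, the toy salted hashing above can be pushed one step further (a sketch, not a production minhash): the fraction of positions in which two signatures agree estimates the Jaccard similarity of the underlying shingle sets. Note that Python’s built-in <code class="language-plaintext highlighter-rouge">hash</code> is randomized per process but consistent within a run, which is all this estimate needs.</p>

```python
def hash_func(a_string: str, salt: int = 1) -> int:
    return hash(a_string + str(salt))

def shingles(text: str, k: int = 4) -> set:
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def signature(shingle_set: set, dim: int = 128) -> list:
    # one min-hash per salt value, as in the toy example above
    return [min(hash_func(sh, salt=salt) for sh in shingle_set) for salt in range(dim)]

a = shingles("this is a piece of text")
b = shingles("this is a similar piece of text")
sig_a, sig_b = signature(a), signature(b)

estimate = sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
exact = len(a & b) / len(a | b)
print(f"estimated Jaccard: {estimate:.2f}, exact Jaccard: {exact:.2f}")
```

The two numbers should be close; the more hash components, the tighter the estimate.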

<h2 id="datasketch-lsh--a-toy-example">Datasketch LSH – a toy example</h2>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">datasketch</span> <span class="kn">import</span> <span class="n">MinHash</span><span class="p">,</span> <span class="n">MinHashLSH</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">SIMILARITY_THRESHOLD</span> <span class="o">=</span> <span class="mf">0.6</span>
<span class="n">NUM_PERMS</span> <span class="o">=</span> <span class="mi">96</span>
<span class="n">SHINGLE_SIZE</span> <span class="o">=</span> <span class="mi">4</span>
</code></pre></div></div>

<p>Three similar strings. We’ll index the first two and then look for near-duplicates of the third.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">s1</span> <span class="o">=</span> <span class="s">"This is a piece of text"</span>
<span class="n">s2</span> <span class="o">=</span> <span class="s">"This is a similar piece of text"</span>
<span class="n">s3</span> <span class="o">=</span> <span class="s">"This is also a similar piece of text"</span>
</code></pre></div></div>

<p>Inserting the strings, split by whitespace, into <code class="language-plaintext highlighter-rouge">MinHash</code> objects.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">minhash1</span> <span class="o">=</span> <span class="n">MinHash</span><span class="p">(</span><span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>
<span class="n">minhash2</span> <span class="o">=</span> <span class="n">MinHash</span><span class="p">(</span><span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>
<span class="n">minhash3</span> <span class="o">=</span> <span class="n">MinHash</span><span class="p">(</span><span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>

<span class="k">for</span> <span class="n">d</span> <span class="ow">in</span> <span class="nb">set</span><span class="p">(</span><span class="n">s1</span><span class="p">.</span><span class="n">split</span><span class="p">()):</span>
    <span class="n">minhash1</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">d</span><span class="p">.</span><span class="n">encode</span><span class="p">(</span><span class="s">"utf8"</span><span class="p">))</span>
<span class="k">for</span> <span class="n">d</span> <span class="ow">in</span> <span class="nb">set</span><span class="p">(</span><span class="n">s2</span><span class="p">.</span><span class="n">split</span><span class="p">()):</span>
    <span class="n">minhash2</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">d</span><span class="p">.</span><span class="n">encode</span><span class="p">(</span><span class="s">"utf8"</span><span class="p">))</span>
<span class="k">for</span> <span class="n">d</span> <span class="ow">in</span> <span class="nb">set</span><span class="p">(</span><span class="n">s3</span><span class="p">.</span><span class="n">split</span><span class="p">()):</span>
    <span class="n">minhash3</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">d</span><span class="p">.</span><span class="n">encode</span><span class="p">(</span><span class="s">"utf8"</span><span class="p">))</span>
</code></pre></div></div>

<p>Create an LSH index and insert the first 2 <code class="language-plaintext highlighter-rouge">MinHash</code> objects into it.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh</span> <span class="o">=</span> <span class="n">MinHashLSH</span><span class="p">(</span><span class="n">threshold</span><span class="o">=</span><span class="n">SIMILARITY_THRESHOLD</span><span class="p">,</span> <span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>
<span class="n">lsh</span><span class="p">.</span><span class="n">insert</span><span class="p">(</span><span class="s">"text1"</span><span class="p">,</span> <span class="n">minhash1</span><span class="p">)</span>
<span class="n">lsh</span><span class="p">.</span><span class="n">insert</span><span class="p">(</span><span class="s">"text2"</span><span class="p">,</span> <span class="n">minhash2</span><span class="p">)</span>
</code></pre></div></div>

<p>Querying near-duplicates for the 3rd piece of text.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh</span><span class="p">.</span><span class="n">query</span><span class="p">(</span><span class="n">minhash3</span><span class="p">)</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>['text2']
</code></pre></div></div>

<p><strong>Same with Redis storage as a backend, not Python dictionaries</strong></p>

<p>See the <a href="http://ekzhu.com/datasketch/lsh.html">MinHashLSH docs</a> to configure the algorithm to run with the Redis backend. The idea is that querying LSH for near-duplicates only requires lookups to fetch signatures. Redis is an in-memory database that allows for very fast lookups and scales much better than Python dictionaries.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh_redis</span> <span class="o">=</span> <span class="n">MinHashLSH</span><span class="p">(</span>
    <span class="n">threshold</span><span class="o">=</span><span class="n">SIMILARITY_THRESHOLD</span><span class="p">,</span>
    <span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">,</span>
    <span class="n">storage_config</span><span class="o">=</span><span class="p">{</span><span class="s">"type"</span><span class="p">:</span> <span class="s">"redis"</span><span class="p">,</span> 
                    <span class="s">"redis"</span><span class="p">:</span> <span class="p">{</span><span class="s">"host"</span><span class="p">:</span> <span class="s">"localhost"</span><span class="p">,</span> 
                              <span class="s">"port"</span><span class="p">:</span> <span class="mi">6379</span><span class="p">}},</span>
<span class="p">)</span>
<span class="n">lsh_redis</span><span class="p">.</span><span class="n">insert</span><span class="p">(</span><span class="s">"text1"</span><span class="p">,</span> <span class="n">minhash1</span><span class="p">)</span>
<span class="n">lsh_redis</span><span class="p">.</span><span class="n">insert</span><span class="p">(</span><span class="s">"text2"</span><span class="p">,</span> <span class="n">minhash2</span><span class="p">)</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh_redis</span><span class="p">.</span><span class="n">query</span><span class="p">(</span><span class="n">minhash3</span><span class="p">)</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>['text2']
</code></pre></div></div>

<h2 id="running-lsh-near-duplicate-detection-with-a-real-world-dataset">Running LSH near-duplicate detection with a real-world dataset</h2>

<p>Next, we run the algorithm on a realistic dataset of news about cryptocurrencies, available as a <a href="https://www.kaggle.com/kashnitsky/news-about-major-cryptocurrencies-20132018-40k">Kaggle dataset</a>.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">SIMILARITY_THRESHOLD</span> <span class="o">=</span> <span class="mf">0.8</span>
<span class="n">NUM_PERMS</span> <span class="o">=</span> <span class="mi">128</span>
<span class="n">SHINGLE_SIZE</span> <span class="o">=</span> <span class="mi">4</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh</span> <span class="o">=</span> <span class="n">MinHashLSH</span><span class="p">(</span><span class="n">threshold</span><span class="o">=</span><span class="n">SIMILARITY_THRESHOLD</span><span class="p">,</span> <span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>
</code></pre></div></div>

<p>Reading data</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># You can download the dataset and customize this path
</span><span class="n">PATH_TO_DATA</span> <span class="o">=</span> <span class="n">Path</span><span class="p">(</span><span class="s">"crypto_news"</span><span class="p">)</span>
</code></pre></div></div>

<p>The following two parts of the dataset imitate the historical part (<code class="language-plaintext highlighter-rouge">index_df</code>) and the query part (<code class="language-plaintext highlighter-rouge">query_df</code>). For each title in the query part, we’d like to find near-duplicate titles in the historical part.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">index_df</span> <span class="o">=</span> <span class="n">pd</span><span class="p">.</span><span class="n">read_csv</span><span class="p">(</span><span class="n">PATH_TO_DATA</span> <span class="o">/</span> 
                       <span class="s">"crypto_news_parsed_2013-2017_train.csv"</span><span class="p">)</span>
<span class="n">query_df</span> <span class="o">=</span> <span class="n">pd</span><span class="p">.</span><span class="n">read_csv</span><span class="p">(</span><span class="n">PATH_TO_DATA</span> <span class="o">/</span> 
                       <span class="s">"crypto_news_parsed_2018_validation.csv"</span><span class="p">)</span>
</code></pre></div></div>

<p>We’ll identify each title by an id, hence the reindexing below. Also, the dataset has quite a few fields, but we’ll only work with the <code class="language-plaintext highlighter-rouge">title</code> field.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">index_df</span><span class="p">.</span><span class="n">index</span> <span class="o">=</span> <span class="p">[</span><span class="sa">f</span><span class="s">'train_</span><span class="si">{</span><span class="n">i</span><span class="si">}</span><span class="s">'</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">index_df</span><span class="p">))]</span>
<span class="n">query_df</span><span class="p">.</span><span class="n">index</span> <span class="o">=</span> <span class="p">[</span><span class="sa">f</span><span class="s">'val_</span><span class="si">{</span><span class="n">i</span><span class="si">}</span><span class="s">'</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">query_df</span><span class="p">))]</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">index_df</span><span class="p">[[</span><span class="s">'title'</span><span class="p">]].</span><span class="n">head</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span>
</code></pre></div></div>

<div>
<style scoped="">
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>title</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>train_0</th>
      <td>Bitcoin Price Update: Will China Lead us Down?</td>
    </tr>
    <tr>
      <th>train_1</th>
      <td>Key Bitcoin Price Levels for Week 51 (15 – 22 ...</td>
    </tr>
  </tbody>
</table>
</div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">query_df</span><span class="p">[[</span><span class="s">'title'</span><span class="p">]].</span><span class="n">head</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span>
</code></pre></div></div>

<div>
<style scoped="">
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th>title</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th>val_0</th>
      <td>Paris Hilton’s Hotel Mogul Father to Sell $38 ...</td>
    </tr>
    <tr>
      <th>val_1</th>
      <td>Playboy Sues Cryptocurrency Company for Breach...</td>
    </tr>
  </tbody>
</table>
</div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">preprocess</span><span class="p">(</span><span class="n">string</span><span class="p">,</span> <span class="n">maxlen</span><span class="o">=</span><span class="mi">500</span><span class="p">):</span>
    <span class="n">tmp_string</span> <span class="o">=</span> <span class="n">string</span><span class="p">[:</span><span class="n">maxlen</span><span class="p">]</span>
    <span class="n">tmp_string</span> <span class="o">=</span> <span class="n">re</span><span class="p">.</span><span class="n">sub</span><span class="p">(</span><span class="sa">r</span><span class="s">"(\d+)"</span><span class="p">,</span> 
                        <span class="k">lambda</span> <span class="n">x</span><span class="p">:</span> <span class="n">num2words</span><span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="n">x</span><span class="p">.</span><span class="n">group</span><span class="p">(</span><span class="mi">0</span><span class="p">))),</span> 
                        <span class="n">tmp_string</span><span class="p">)</span>
    <span class="n">res</span> <span class="o">=</span> <span class="n">re</span><span class="p">.</span><span class="n">sub</span><span class="p">(</span><span class="sa">r</span><span class="s">"[\W]+"</span><span class="p">,</span> <span class="s">""</span><span class="p">,</span> <span class="n">tmp_string</span><span class="p">).</span><span class="n">lower</span><span class="p">()</span>
    <span class="k">return</span> <span class="n">res</span>


<span class="k">def</span> <span class="nf">_shingle</span><span class="p">(</span><span class="n">string</span><span class="p">,</span> <span class="n">shingle_size</span><span class="o">=</span><span class="mi">4</span><span class="p">):</span>
    <span class="n">shings</span> <span class="o">=</span> <span class="p">{</span>
        <span class="n">string</span><span class="p">[</span><span class="n">i</span> <span class="p">:</span> <span class="n">i</span> <span class="o">+</span> <span class="n">shingle_size</span><span class="p">]</span> 
        <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">string</span><span class="p">)</span> <span class="o">-</span> <span class="n">shingle_size</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
    <span class="p">}</span>
    <span class="k">return</span> <span class="nb">set</span><span class="p">(</span><span class="n">shings</span><span class="p">)</span>
</code></pre></div></div>
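<p>As a quick sanity check, here is what the shingling step yields for one of the titles (a stdlib-only sketch: this <code class="language-plaintext highlighter-rouge">preprocess</code> stand-in skips the <code class="language-plaintext highlighter-rouge">num2words</code> digit spelling used above, which doesn’t matter for a digit-free title).</p>

```python
import re


def preprocess(string, maxlen=500):
    # simplified stand-in: lowercase and strip non-word characters;
    # the full version above also spells out digits via num2words
    return re.sub(r"[\W]+", "", string[:maxlen]).lower()


def _shingle(string, shingle_size=4):
    # overlapping character n-grams of a fixed size
    return {string[i : i + shingle_size]
            for i in range(len(string) - shingle_size + 1)}


shingles = _shingle(preprocess("Bitcoin Price Update"), shingle_size=4)
print(len(shingles))         # 15 distinct character 4-grams
print(sorted(shingles)[:3])  # ['bitc', 'ceup', 'coin']
```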

<p><strong>LSH from Datasketch</strong></p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">lsh</span> <span class="o">=</span> <span class="n">MinHashLSH</span><span class="p">(</span><span class="n">threshold</span><span class="o">=</span><span class="n">SIMILARITY_THRESHOLD</span><span class="p">,</span> <span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>
</code></pre></div></div>

<p><strong>Populating the index</strong></p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="n">id_</span><span class="p">,</span> <span class="n">title</span> <span class="ow">in</span> <span class="n">tqdm</span><span class="p">(</span><span class="n">index_df</span><span class="p">[</span><span class="s">'title'</span><span class="p">].</span><span class="n">items</span><span class="p">()):</span>
    
    <span class="n">title_shingles</span> <span class="o">=</span> <span class="n">_shingle</span><span class="p">(</span><span class="n">preprocess</span><span class="p">(</span><span class="n">title</span><span class="p">),</span> 
                              <span class="n">shingle_size</span><span class="o">=</span><span class="n">SHINGLE_SIZE</span><span class="p">)</span>

    <span class="n">title_minhash</span> <span class="o">=</span> <span class="n">MinHash</span><span class="p">(</span><span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>

    <span class="k">for</span> <span class="n">shing</span> <span class="ow">in</span> <span class="n">title_shingles</span><span class="p">:</span>
        <span class="n">title_minhash</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">shing</span><span class="p">.</span><span class="n">encode</span><span class="p">(</span><span class="s">"utf8"</span><span class="p">))</span>

    <span class="n">lsh</span><span class="p">.</span><span class="n">insert</span><span class="p">(</span><span class="n">id_</span><span class="p">,</span> <span class="n">title_minhash</span><span class="p">,</span> <span class="n">check_duplication</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
</code></pre></div></div>

<p>Check how many titles we’ve indexed:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">len</span><span class="p">(</span><span class="n">lsh</span><span class="p">.</span><span class="n">get_counts</span><span class="p">()[</span><span class="mi">0</span><span class="p">])</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>27462
</code></pre></div></div>

<p>If needed, we can serialize the LSH object</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">"lsh.pkl"</span><span class="p">,</span> <span class="s">"wb"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
    <span class="n">pickle</span><span class="p">.</span><span class="n">dump</span><span class="p">(</span><span class="n">lsh</span><span class="p">,</span> <span class="n">f</span><span class="p">)</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">!</span><span class="n">du</span> <span class="o">-</span><span class="n">hc</span> <span class="n">lsh</span><span class="p">.</span><span class="n">pkl</span>
</code></pre></div></div>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> 35M	lsh.pkl
 35M	total
</code></pre></div></div>

<p><strong>Get near-duplicates for the query data</strong></p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">dup_dict</span> <span class="o">=</span> <span class="p">{}</span>

<span class="k">for</span> <span class="n">id_</span><span class="p">,</span> <span class="n">title</span> <span class="ow">in</span> <span class="n">tqdm</span><span class="p">(</span><span class="n">query_df</span><span class="p">[</span><span class="s">'title'</span><span class="p">].</span><span class="n">items</span><span class="p">()):</span>

    <span class="n">title_shingles</span> <span class="o">=</span> <span class="n">_shingle</span><span class="p">(</span><span class="n">preprocess</span><span class="p">(</span><span class="n">title</span><span class="p">),</span> 
                              <span class="n">shingle_size</span><span class="o">=</span><span class="n">SHINGLE_SIZE</span><span class="p">)</span>

    <span class="n">title_minhash</span> <span class="o">=</span> <span class="n">MinHash</span><span class="p">(</span><span class="n">num_perm</span><span class="o">=</span><span class="n">NUM_PERMS</span><span class="p">)</span>

    <span class="k">for</span> <span class="n">shing</span> <span class="ow">in</span> <span class="n">title_shingles</span><span class="p">:</span>
        <span class="n">title_minhash</span><span class="p">.</span><span class="n">update</span><span class="p">(</span><span class="n">shing</span><span class="p">.</span><span class="n">encode</span><span class="p">(</span><span class="s">"utf8"</span><span class="p">))</span>

    <span class="n">dups</span> <span class="o">=</span> <span class="n">lsh</span><span class="p">.</span><span class="n">query</span><span class="p">(</span><span class="n">title_minhash</span><span class="p">)</span>
    <span class="n">dup_dict</span><span class="p">[</span><span class="n">id_</span><span class="p">]</span> <span class="o">=</span> <span class="n">dups</span>
</code></pre></div></div>

<p><strong>(Optional step) Analyze true Jaccard similarity</strong></p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">jaccard_similarity</span><span class="p">(</span><span class="n">list1</span><span class="p">,</span> <span class="n">list2</span><span class="p">):</span>
    <span class="n">s1</span> <span class="o">=</span> <span class="nb">set</span><span class="p">(</span><span class="n">list1</span><span class="p">)</span>
    <span class="n">s2</span> <span class="o">=</span> <span class="nb">set</span><span class="p">(</span><span class="n">list2</span><span class="p">)</span>
    <span class="k">return</span> <span class="nb">len</span><span class="p">(</span><span class="n">s1</span><span class="p">.</span><span class="n">intersection</span><span class="p">(</span><span class="n">s2</span><span class="p">))</span> <span class="o">/</span> <span class="nb">len</span><span class="p">(</span><span class="n">s1</span><span class="p">.</span><span class="n">union</span><span class="p">(</span><span class="n">s2</span><span class="p">))</span>
</code></pre></div></div>

<p>To assess precision, we calculate the actual Jaccard similarity for the candidates identified by LSH.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">jaccard_sims</span> <span class="o">=</span> <span class="p">[]</span>

<span class="k">for</span> <span class="n">id_</span><span class="p">,</span> <span class="n">dups</span> <span class="ow">in</span> <span class="n">tqdm</span><span class="p">(</span><span class="n">dup_dict</span><span class="p">.</span><span class="n">items</span><span class="p">()):</span>
    <span class="k">if</span> <span class="n">dups</span><span class="p">:</span>
        <span class="n">shingle_query_title</span> <span class="o">=</span> <span class="n">_shingle</span><span class="p">(</span>
                              <span class="n">preprocess</span><span class="p">(</span>
                              <span class="n">query_df</span><span class="p">.</span><span class="n">loc</span><span class="p">[</span><span class="n">id_</span><span class="p">,</span> <span class="s">"title"</span><span class="p">]))</span>
        <span class="k">for</span> <span class="n">dup_id</span> <span class="ow">in</span> <span class="n">dups</span><span class="p">:</span>
            <span class="n">shingle_indexed_title</span> <span class="o">=</span> <span class="n">_shingle</span><span class="p">(</span>
                                     <span class="n">preprocess</span><span class="p">(</span>
                                     <span class="n">index_df</span><span class="p">.</span><span class="n">loc</span><span class="p">[</span><span class="n">dup_id</span><span class="p">,</span> <span class="s">"title"</span><span class="p">]))</span>
            <span class="n">sim</span> <span class="o">=</span> <span class="n">jaccard_similarity</span><span class="p">(</span>
            	<span class="n">shingle_query_title</span><span class="p">,</span>
            	<span class="n">shingle_indexed_title</span><span class="p">)</span>
            <span class="n">jaccard_sims</span><span class="p">.</span><span class="n">append</span><span class="p">(</span><span class="n">sim</span><span class="p">)</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">plt</span><span class="p">.</span><span class="n">hist</span><span class="p">(</span><span class="n">jaccard_sims</span><span class="p">,</span> <span class="n">bins</span><span class="o">=</span><span class="mi">20</span><span class="p">);</span>
</code></pre></div></div>

<div style="text-align:center">
<img src="/images/20230627-practical-near-dup-detection/precision_scores.png" width="500px" />
</div>

<p>The distribution looks good: mostly, LSH indeed captures highly similar pairs.</p>

<p><strong>Precision</strong></p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">(</span><span class="n">pd</span><span class="p">.</span><span class="n">Series</span><span class="p">(</span><span class="n">jaccard_sims</span><span class="p">)</span> <span class="o">&gt;=</span> <span class="n">SIMILARITY_THRESHOLD</span><span class="p">).</span><span class="nb">sum</span><span class="p">()</span> <span class="o">/</span> <span class="nb">len</span><span class="p">(</span><span class="n">jaccard_sims</span><span class="p">)</span>
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0.8334
</code></pre></div></div>

<blockquote>
<p><strong><em>NOTE:</em></strong>  That’s the precision of the LSH algorithm. In practice, it’s easy to reach 100% precision at the extra cost of calculating the actual Jaccard similarity for the candidate pairs (as done above) and filtering out the false positives, i.e. the candidate pairs with similarity below the predefined threshold.</p>
</blockquote>
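<p>The filtering step the note describes can be sketched as follows (a self-contained, illustrative sketch: the <code class="language-plaintext highlighter-rouge">filter_false_positives</code> helper name and the toy shingle sets are made up, while the candidate dictionary mirrors the <code class="language-plaintext highlighter-rouge">dup_dict</code> structure built above).</p>

```python
# Post-filter LSH candidates by exact Jaccard similarity to remove
# false positives. Helper names and toy data are illustrative.
def jaccard_similarity(set1, set2):
    s1, s2 = set(set1), set(set2)
    return len(s1 & s2) / len(s1 | s2)


def filter_false_positives(dup_dict, query_shingles, index_shingles,
                           threshold=0.8):
    # keep only candidates whose true similarity clears the threshold
    return {
        query_id: [
            cand_id for cand_id in candidates
            if jaccard_similarity(query_shingles[query_id],
                                  index_shingles[cand_id]) >= threshold
        ]
        for query_id, candidates in dup_dict.items()
    }


# toy shingle sets: "train_0" is a true near-duplicate, "train_1" is not
query_shingles = {"val_0": {"abcd", "bcde", "cdef"}}
index_shingles = {
    "train_0": {"abcd", "bcde", "cdef"},  # Jaccard = 1.0
    "train_1": {"abcd", "wxyz", "qrst"},  # Jaccard = 1/5 = 0.2
}
dup_dict = {"val_0": ["train_0", "train_1"]}  # raw LSH candidates

print(filter_false_positives(dup_dict, query_shingles, index_shingles))
# {'val_0': ['train_0']}
```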

<p><strong>Recall</strong></p>

<p>This is a very computationally intensive step (which we speed up with multiprocessing): we calculate all pairwise Jaccard similarities between 11k query titles and 27k indexed titles and count how many true near-duplicates the LSH algo missed.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">shingled_query_text</span> <span class="o">=</span> <span class="p">[</span>
    <span class="n">_shingle</span><span class="p">(</span><span class="n">preprocess</span><span class="p">(</span><span class="n">el</span><span class="p">))</span> <span class="k">for</span> <span class="n">el</span> <span class="ow">in</span> <span class="n">tqdm</span><span class="p">(</span><span class="n">query_df</span><span class="p">[</span><span class="s">"title"</span><span class="p">])</span>
<span class="p">]</span>
<span class="n">shingled_index_texts</span> <span class="o">=</span> <span class="p">[</span>
    <span class="n">_shingle</span><span class="p">(</span><span class="n">preprocess</span><span class="p">(</span><span class="n">el</span><span class="p">))</span> <span class="k">for</span> <span class="n">el</span> <span class="ow">in</span> <span class="n">tqdm</span><span class="p">(</span><span class="n">index_df</span><span class="p">[</span><span class="s">"title"</span><span class="p">])</span>
<span class="p">]</span>
</code></pre></div></div>

<p>Building pairwise Jaccard similarity matrix with multiprocessing</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">multiprocessing</span> <span class="kn">import</span> <span class="n">Pool</span>

<span class="k">class</span> <span class="nc">JaccardPool</span><span class="p">(</span><span class="nb">object</span><span class="p">):</span>
    <span class="k">def</span> <span class="nf">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">archive_shingles</span><span class="p">):</span>
        <span class="bp">self</span><span class="p">.</span><span class="n">archive_shingles</span> <span class="o">=</span> <span class="n">archive_shingles</span>

    <span class="k">def</span> <span class="nf">__call__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">val_shing</span><span class="p">):</span>
        <span class="s">"""
        :param val_shing: a shingle set to compare with each one in
                          `archive_shingles` and to calculate Jaccard similarity
        """</span>
        <span class="k">return</span> <span class="p">[</span>
            <span class="n">jaccard_similarity</span><span class="p">(</span><span class="n">val_shing</span><span class="p">,</span> <span class="n">arch_shing</span><span class="p">)</span>
            <span class="k">for</span> <span class="n">arch_shing</span> <span class="ow">in</span> <span class="bp">self</span><span class="p">.</span><span class="n">archive_shingles</span>
        <span class="p">]</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">try</span><span class="p">:</span>
    <span class="n">pool</span> <span class="o">=</span> <span class="n">Pool</span><span class="p">(</span><span class="mi">8</span><span class="p">)</span>  <span class="c1"># on 8 processors
</span>    <span class="n">engine</span> <span class="o">=</span> <span class="n">JaccardPool</span><span class="p">(</span><span class="n">archive_shingles</span><span class="o">=</span><span class="n">shingled_index_texts</span><span class="p">)</span>
    <span class="n">sims</span> <span class="o">=</span> <span class="n">pool</span><span class="p">.</span><span class="nb">map</span><span class="p">(</span><span class="n">engine</span><span class="p">,</span> <span class="n">shingled_query_text</span><span class="p">)</span>
<span class="k">finally</span><span class="p">:</span>  <span class="c1"># To make sure processes are closed in the end, even if errors happen
</span>    <span class="n">pool</span><span class="p">.</span><span class="n">close</span><span class="p">()</span>
    <span class="n">pool</span><span class="p">.</span><span class="n">join</span><span class="p">()</span>
</code></pre></div></div>

<p>Now we have a similarity matrix of size [11k, 27k], which we can use to compute recall, i.e. the share of pairs with Jaccard similarity above the given threshold that we managed to find.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">sim_matrix</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">vstack</span><span class="p">(</span><span class="n">sims</span><span class="p">)</span>
<span class="k">print</span><span class="p">((</span><span class="n">pd</span><span class="p">.</span><span class="n">Series</span><span class="p">(</span><span class="n">jaccard_sims</span><span class="p">)</span> <span class="o">&gt;=</span> <span class="n">SIMILARITY_THRESHOLD</span><span class="p">).</span><span class="nb">sum</span><span class="p">()</span> <span class="o">/</span> 
      <span class="p">(</span><span class="n">sim_matrix</span> <span class="o">&gt;=</span> <span class="n">SIMILARITY_THRESHOLD</span><span class="p">).</span><span class="nb">sum</span><span class="p">())</span>
</code></pre></div></div>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0.925
</code></pre></div></div>

<blockquote>
  <p><strong><em>NOTE:</em></strong>  We see that with short titles recall is pretty high. In reality though, for large datasets, recall is unknown (without a crazy carbon footprint from computing all pairwise text similarities).</p>
</blockquote>
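<p>One pragmatic workaround (not covered in the post; a hedged sketch with illustrative names) is to estimate recall from a random sample of query–index pairs instead of the full similarity matrix. Keep in mind that true near-duplicate pairs are rare, so a uniform sample gives a noisy estimate unless it is large.</p>

```python
import random


def jaccard_similarity(set1, set2):
    s1, s2 = set(set1), set(set2)
    return len(s1 & s2) / len(s1 | s2)


def estimate_recall(query_shingles, index_shingles, lsh_candidates,
                    threshold=0.8, n_samples=10_000, seed=17):
    # sample random (query, index) pairs; among the sampled pairs whose
    # true similarity clears the threshold, count how many LSH flagged
    rng = random.Random(seed)
    query_ids, index_ids = list(query_shingles), list(index_shingles)
    true_pairs, found = 0, 0
    for _ in range(n_samples):
        q, i = rng.choice(query_ids), rng.choice(index_ids)
        if jaccard_similarity(query_shingles[q],
                              index_shingles[i]) >= threshold:
            true_pairs += 1
            found += i in lsh_candidates.get(q, [])
    return found / true_pairs if true_pairs else float("nan")


# toy data: one true near-duplicate pair, which LSH did flag
query_shingles = {"val_0": {"abcd", "bcde", "cdef"}}
index_shingles = {"train_0": {"abcd", "bcde", "cdef"},
                  "train_1": {"wxyz", "qrst", "mnop"}}
lsh_candidates = {"val_0": ["train_0"]}

print(estimate_recall(query_shingles, index_shingles, lsh_candidates,
                      n_samples=1_000))  # 1.0
```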

<h2 id="literature">Literature</h2>

<ul>
  <li><a href="http://infolab.stanford.edu/~ullman/mmds/ch3n.pdf">“Mining massive datasets”, ch. 3</a> – the theoretical foundation of Locality-Sensitive Hashing</li>
  <li><a href="https://mattilyra.github.io/2017/05/23/document-deduplication-with-lsh.html">A blog post</a> on this topic</li>
  <li><a href="https://github.com/ekzhu/datasketch">Datasketch</a> – a Python library implementing, among all, the <code class="language-plaintext highlighter-rouge">MinHashLSH</code> algorithm</li>
</ul>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[In this post, I review Locality-Sensitive Hashing for near-duplicate detection. I demonstrate the principle and provide a quick intro to Datasketch which is a convenient library to run near-duplicate detection at scale. The problem being solved Once I needed to find near-duplicates in a (relatively) large collection of texts ~5 mln. docs. I wanted the solution to be: easy-to-use scalable exact, i.e. when a pair of near-duplicate texts is flagged, we can be confident that those are indeed near-duplicates. I somehow struggled for quite a while to find a solution that would satisfy all conditions. Until I found Locality-Sensitive Hashing (LSH) and its implementation – Datasketch. MinHash LSH – the principle 1. When we need to deduplicate a single dataset image credit In a nutshell, this works as follows. For a most typical scenario where we need to identify near-duplicates in a single collection of texts, we perform the following steps: the text is processed and cut into shingles (overlapping substrings of a fixed size); then a set of shingles is minhashed, this involves creating multiple hashes for a set of shingles, so that we end up with a single vector of integers for each piece of text, a.k.a. a signature; the dimension of the hash vector is further reduced via Locality-Sensitive Hashing, which is creating a single hash from a number (band) of nearby elements in the hash vector. The resulting vector is also called a bandhash signature or bandhash vector; all pairs of signatures where elements match at least in one position, generate candidate pairs; (optionally) we can measure the true similarity between corresponding pieces of text to account for errors (False Positives) of the LSH algorithm. I know there are quite a few terms here. 
Instead of explaining all of them (and thus re-writing something similar to this nice blog post) I’d refer to a classical book “Mining massive datasets”, ch. 3 for an intro into Locality-Sensitive Hashing and finding similar items. In this blog post, we’ll focus on a practical use case of finding near-dups in a large collection of texts. 2. When we have incoming “query” data that we want to compare to a large “index” dataset Here “historical” data can be a large dataset, e.g. 5 mln. documents. The “query” dataset is much smaller, e.g. 10K documents that we receive daily, say via some API, and would like to deduplicate. We can do without LSH at all just comparing 10K fresh documents to 5 mln. historical documents. But that’d require 50 bln. comparisons each day might be too computationally prohibitive (a dumb idea leading, above all, to a considerable carbon footprint). LSH is a technique that approximates the exact similarity function. The essence of the algorithm is to create signatures for each piece of text that is identified here by a DocID. Signatures are just numeric vectors of some fixed dimension, e.g. 128. For two pieces of text to be considered as candidates for near-duplicates, it suffices for their hash signatures to match in at least one component. In the picture above, a pair highlighted in green is a candidate, and a pair highlighted in orange is another one. Bolded are those matching hash values. Limitations The method only takes care of the lexical similarity not semantical. Thus, with LSH, we won’t identify near-duplicates that differ due to parapharasing, synonym replacement, etc. The method is probabilistic, i.e. some errors are allowed. Not all candidates would actually be near-duplicates. One can check this by calculating Jaccard similarity of the candidates. Thus, the algorithm is characterized by precision (out of all pairs of candidates found by the algorithm, what’s the proportion of real near-duplicates, i.e. 
with their Jaccard similarity exceeding the predefined threshold) and recall (out of all near-duplicate pairs, what’s the proportion of those found by the algorithm). In practice, for a large enough dataset and long pieces of text (e.g. full documents, not just titles), LSH tends to work worse in terms of precision, while recall cannot be known without a crazy carbon footprint. Finding true near-duplicate pairs in a relatively small collection of 50K texts requires &gt;1.2B calls to a Jaccard similarity subroutine. # imports import json import pickle import re from pathlib import Path import numpy as np import pandas as pd from datasketch import MinHash, MinHashLSH from matplotlib import pyplot as plt from num2words import num2words from tqdm import tqdm Preprocessing and hashing Essentially, MinHashLSH operates with shingle sets where shingles are overlapping substrings of a fixed size. The following 4 code cells show how MinHashLSH builds hash vectors (a.k.a. signatures) for input texts. Further, as described in the picture above, for two pieces of text to be considered as candidates for near-duplicates, it suffices for their hash signatures to match in at least one component. s = "this is a piece of text" shingle_size = 4 shingle_set = {s[i : i + shingle_size] for i in range(len(s) - shingle_size + 1)} shingle_set {' a p', ' is ', ' of ', ' pie', ' tex', 'a pi', 'ce o', 'e of', 'ece ', 'f te', 'his ', 'iece', 'is a', 'is i', 'of t', 'piec', 's a ', 's is', 'text', 'this'} def hash_func(a_string, salt: int = 1): return hash(a_string + str(salt)) These are the 5 components of a toy 5-dimensional hash signature. Each one of them is created by hashing all shingles and taking the min. value of the hashes.
for i, salt in enumerate(range(5)): print(i, min([hash_func(el, salt=salt) for el in shingle_set])) 0 -7220920153181112185 1 -9127360350460247126 2 -8803612098918371157 3 -8027849914885749588 4 -9069105076530742277 Datasketch LSH – a toy example from datasketch import MinHash, MinHashLSH SIMILARITY_THRESHOLD = 0.6 NUM_PERMS = 96 SHINGLE_SIZE = 4 Three similar strings. We’ll index the first two, and then look for near-duplicates for the 3rd one. s1 = "This is a piece of text" s2 = "This is a similar piece of text" s3 = "This is also a similar piece of text" Inserting strings split by whitespace into MinHash objects. minhash1 = MinHash(num_perm=NUM_PERMS) minhash2 = MinHash(num_perm=NUM_PERMS) minhash3 = MinHash(num_perm=NUM_PERMS) for d in set(s1.split()): minhash1.update(d.encode("utf8")) for d in set(s2.split()): minhash2.update(d.encode("utf8")) for d in set(s3.split()): minhash3.update(d.encode("utf8")) Create an LSH index and insert the first 2 MinHash objects into it. lsh = MinHashLSH(threshold=SIMILARITY_THRESHOLD, num_perm=NUM_PERMS) lsh.insert("text1", minhash1) lsh.insert("text2", minhash2) Querying near-duplicates for the 3rd piece of text. lsh.query(minhash3) ['text2'] Same with Redis storage as a backend, not Python dictionaries See the MinHashLSH docs to configure the algo to run with the Redis backend. The idea is that to query LSH for near-duplicates, we only need to make lookups to get signatures. Redis is an in-memory database that allows for very fast lookups; also, it scales much better than Python dictionaries. lsh_redis = MinHashLSH( threshold=SIMILARITY_THRESHOLD, num_perm=NUM_PERMS, storage_config={"type": "redis", "redis": {"host": "localhost", "port": 6379}}, ) lsh_redis.insert("text1", minhash1) lsh_redis.insert("text2", minhash2) lsh_redis.query(minhash3) ['text2'] Running LSH near-duplicate detection with a real-world dataset Further, we run the algorithm with a realistic dataset – news about cryptocurrencies, a Kaggle dataset.
SIMILARITY_THRESHOLD = 0.8 NUM_PERMS = 128 SHINGLE_SIZE = 4 lsh = MinHashLSH(threshold=SIMILARITY_THRESHOLD, num_perm=NUM_PERMS) Reading data # You can download the dataset and customize this path PATH_TO_DATA = Path("crypto_news") The following two parts of the dataset will imitate the historical part (index_df) and the query part (query_df). For each title in the query part, we’d like to find near-duplicate titles in the historical part. index_df = pd.read_csv(PATH_TO_DATA / "crypto_news_parsed_2013-2017_train.csv") query_df = pd.read_csv(PATH_TO_DATA / "crypto_news_parsed_2018_validation.csv") We’ll identify each title by some id, so reindexing. Also, there are quite a few fields in the dataset; we’ll only use the title field. index_df.index = [f'train_{i}' for i in range(len(index_df))] query_df.index = [f'val_{i}' for i in range(len(query_df))] index_df[['title']].head(2) title train_0 Bitcoin Price Update: Will China Lead us Down? train_1 Key Bitcoin Price Levels for Week 51 (15 – 22 ... query_df[['title']].head(2) title val_0 Paris Hilton’s Hotel Mogul Father to Sell $38 ... val_1 Playboy Sues Cryptocurrency Company for Breach...
def preprocess(string, maxlen=500): tmp_string = string[:maxlen] tmp_string = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), tmp_string) res = re.sub(r"[\W]+", "", tmp_string).lower() return res def _shingle(string, shingle_size=4): shings = { string[i : i + shingle_size] for i in range(len(string) - shingle_size + 1) } return set(shings) LSH from Datasketch lsh = MinHashLSH(threshold=SIMILARITY_THRESHOLD, num_perm=NUM_PERMS) Populating the index for id_, title in tqdm(index_df['title'].items()): title_shingles = _shingle(preprocess(title), shingle_size=SHINGLE_SIZE) title_minhash = MinHash(num_perm=NUM_PERMS) for shing in title_shingles: title_minhash.update(shing.encode("utf8")) lsh.insert(id_, title_minhash, check_duplication=False) We’ve indexed this many titles: len(lsh.get_counts()[0]) 27462 If needed, we can serialize the LSH object with open("lsh.pkl", "wb") as f: pickle.dump(lsh, f) !du -hc lsh.pkl 35M lsh.pkl 35M total Get near-duplicates for the query data dup_dict = {} for id_, title in tqdm(query_df['title'].items()): title_shingles = _shingle(preprocess(title), shingle_size=SHINGLE_SIZE) title_minhash = MinHash(num_perm=NUM_PERMS) for shing in title_shingles: title_minhash.update(shing.encode("utf8")) dups = lsh.query(title_minhash) dup_dict[id_] = dups (Optional step) Analyze true Jaccard similarity def jaccard_similarity(list1, list2): s1 = set(list1) s2 = set(list2) return len(s1.intersection(s2)) / len(s1.union(s2)) To assess precision, we calculate the actual Jaccard similarity for the candidates identified by LSH.
jaccard_sims = [] for id_, dups in tqdm(dup_dict.items()): if dups: shingle_query_title = _shingle( preprocess( query_df.loc[id_, "title"])) for dup_id in dups: shingle_indexed_title = _shingle( preprocess( index_df.loc[dup_id, "title"])) sim = jaccard_similarity( shingle_query_title, shingle_indexed_title) jaccard_sims.append(sim) plt.hist(jaccard_sims, bins=20); The distribution looks good: mostly, LSH indeed captures similar pairs. Precision (pd.Series(jaccard_sims) &gt;= SIMILARITY_THRESHOLD).sum() / len(jaccard_sims) 0.8334 NOTE: That’s the precision of the LSH algorithm. In practice, it’s very easy to reach 100% precision with the additional effort of calculating the actual Jaccard similarity for the candidate pairs (as done above) and filtering out false positives, i.e. the candidate pairs with similarity below the predefined threshold. Recall This is a very computationally intensive step (that we are speeding up with multiprocessing) – we calculate all pairwise Jaccard similarities between 11k query titles and 27k indexed titles and see how many true near-duplicates the LSH algo missed.
shingled_query_text = [ _shingle(preprocess(el)) for el in tqdm(query_df["title"]) ] shingled_index_texts = [ _shingle(preprocess(el)) for el in tqdm(index_df["title"]) ] Building a pairwise Jaccard similarity matrix with multiprocessing from multiprocessing import Pool class JaccardPool(object): def __init__(self, archive_shingles): self.archive_shingles = archive_shingles def __call__(self, val_shing): """ :param val_shing: a shingle set to compare with each one in `archive_shingles` and to calculate Jaccard similarity """ return [ jaccard_similarity(val_shing, arch_shing) for arch_shing in self.archive_shingles ] try: pool = Pool(8) # on 8 processors engine = JaccardPool(archive_shingles=shingled_index_texts) sims = pool.map(engine, shingled_query_text) finally: # To make sure processes are closed in the end, even if errors happen pool.close() pool.join() Now we have a similarity matrix of size [11k, 27k] which we can use to compute recall, i.e. how many pairs with Jaccard similarity over the given threshold we managed to find. sim_matrix = np.vstack(sims) print((pd.Series(jaccard_sims) &gt;= SIMILARITY_THRESHOLD).sum() / (sim_matrix &gt;= SIMILARITY_THRESHOLD).sum()) 0.925 NOTE: We see that with short titles recall is pretty high. In reality though, for large datasets, recall is unknown (without a crazy carbon footprint from computing all pairwise text similarities). Literature “Mining massive datasets”, ch.
3 – the theoretical foundation of Locality-Sensitive Hashing A blog post on this topic Datasketch – a Python library implementing, among others, the MinHashLSH algorithm]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230627-practical-near-dup-detection/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230627-practical-near-dup-detection/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Is the 99% accuracy claim in detecting chatGPT-generated content really trustworthy?</title><link href="https://yorko.github.io/2023/chatgpt-detectors/" rel="alternate" type="text/html" title="Is the 99% accuracy claim in detecting chatGPT-generated content really trustworthy?" /><published>2023-06-09T00:00:00+00:00</published><updated>2023-06-09T00:00:00+00:00</updated><id>https://yorko.github.io/2023/chatgpt-detectors</id><content type="html" xml:base="https://yorko.github.io/2023/chatgpt-detectors/"><![CDATA[<p><em>In this post I reason about the claimed accuracy of chatGPT detectors and why the task is far from being solved, in spite of those “99% accuracy” pitches that you hear.</em></p>

<hr />

<h2 id="self-claimed-metrics-for-chatgpt-detectors">Self-claimed metrics for chatGPT detectors</h2>

<p>A colleague pointed me to yet another chatGPT detector with 99% accuracy (on top of GPTZero, DetectGPT, etc.). <a href="https://www.forbes.com/sites/ariannajohnson/2023/06/07/new-tool-can-tell-if-something-is-ai-written-with-99-accuracy/?sh=7e98e5ee5ed4">This Forbes article</a> overviews many such detectors.</p>

<p>Let’s summarize the claimed self-metrics of the presented detectors:</p>

<ul>
  <li>TurnitIn: <strong>98%</strong> accuracy</li>
  <li>Copyleaks: <strong>99%</strong> accuracy</li>
  <li>Winston AI: <strong>99%</strong> accuracy</li>
  <li>AI Writing check: 80-90% accuracy</li>
  <li>OpenAI classifier: 26% recall, 91% specificity; assuming a balanced test set, a small math exercise gives <strong>58.5%</strong> accuracy</li>
</ul>
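<p>For reference, the small math exercise is just averaging the two rates; the sketch below assumes a class-balanced test set (that assumption is mine, the article only quotes the 26% and 91% figures):</p>

```python
# Accuracy from recall (sensitivity) and specificity, assuming a
# class-balanced test set: half AI-written, half human-written texts.
recall = 0.26        # share of AI-written texts correctly flagged
specificity = 0.91   # share of human-written texts correctly passed

accuracy = 0.5 * recall + 0.5 * specificity
print(round(accuracy, 3))  # 0.585
```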

<p>Wait. OpenAI hires the best talent in the whole world. Its engineers allegedly work 60-90 hours a week. Is OpenAI really lagging behind the University of Kansas, Winston AI, etc.?</p>

<h2 id="why-is-the-task-hard">Why is the task hard?</h2>

<p>Of course, OpenAI is not lagging behind the University of Kansas and others. The task is actually much harder. While it’s easy to overfit a particular dataset and report 99% scores, it’s much harder to build a generalizable detector that would work for any domain, any language, and any generator (if we allow other models and do not focus solely on chatGPT).</p>

<p>This is a type of task that is hugely susceptible to data drift and model drift (if you’d like the detector to spot content produced by any LLM).</p>

<h2 id="how-do-i-get-99-accuracy-and-raise-money">How do I get 99% accuracy and raise money?</h2>

<p>Easy. Take a handful of papers, create their chatGPT versions (e.g. with paraphrasing), and train any BERT-type model. Bingo! ~90% scores even in a fair cross-validation setup.</p>
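<p>A sketch of that recipe, with TF-IDF + logistic regression standing in for a BERT-type model (the dataset variables are hypothetical):</p>

```python
# Sketch: train a "detector" on one narrow dataset and score it with
# cross-validation on that same dataset. TF-IDF + logistic regression
# stands in for a BERT-type model; `texts` and `labels` (1 = chatGPT
# version, 0 = original paper) are assumed to come from a small,
# homogeneous collection of papers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def in_domain_accuracy(texts, labels):
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    # Even a fair cross-validation here only measures performance on the
    # quirks of this one dataset, not generalization to other domains.
    return cross_val_score(model, texts, labels, cv=5, scoring="accuracy").mean()
```

<p>High scores here say nothing about how the detector fares on a different domain, language, or generator.</p>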

<p>That’s what the University of Kansas did, according to the mentioned Forbes <a href="https://www.forbes.com/sites/ariannajohnson/2023/06/07/new-tool-can-tell-if-something-is-ai-written-with-99-accuracy/?sh=7e98e5ee5ed4">article</a>.</p>

<blockquote>
  <p>The team of researchers selected 64 perspectives (a type of article) and used them to make 128 articles using ChatGPT, which was then used to train the AI detector.
The model had a 100% accuracy rate of identifying human-created articles from AI-generated ones, and a 92% accuracy rate for identifying specific paragraphs within the text.</p>
</blockquote>

<h2 id="how-detectors-failed-in-the-coling-2022-competition">How detectors failed in the COLING 2022 competition</h2>

<p>The argument above is supported by my experience with setting up a COLING 2022 competition track on the detection of AI-generated scientific papers. <a href="https://yorko.github.io/2022/detecting-generated-content/">Here</a> is the full blog post but the gist is the following:</p>

<ul>
  <li>all the models trained by contestants easily overfitted to the competition dataset, hence 99% F1 scores seen on the <a href="https://www.kaggle.com/competitions/detecting-generated-scientific-papers/leaderboard">leaderboard</a>;</li>
  <li>as one of the winners, Domenic Rosati, showed with his follow-up <a href="https://aclanthology.org/2022.sdp-1.27/">paper</a>, the models trained with the provided competition data (<code class="language-plaintext highlighter-rouge">DAGPap22</code> in the table below) generalize very poorly to a new similar dataset that Domenic had generated (<code class="language-plaintext highlighter-rouge">SynSciPass</code> in the table below): 31.4% in a binary classification task is worse than random (actually, if we flip the detector’s predictions, that’d yield 100% - 31.4% = 68.6%).</li>
</ul>
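<p>The flipping trick is worth spelling out: a binary detector that is reliably wrong still carries signal (toy labels below, for illustration):</p>

```python
# Flipping the predictions of a worse-than-random binary detector
# turns accuracy a into 1 - a on the same data.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred  = [0, 1, 0, 0, 1, 0, 0, 1, 0, 1]  # a mostly-wrong detector
flipped = [1 - p for p in y_pred]

print(accuracy(y_true, y_pred))   # 0.1
print(accuracy(y_true, flipped))  # 0.9
```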

<div style="text-align:center">
<img src="/images/20230609-chatgpt-detectors/rosati_detector_generalization.png" width="500px" />
</div>

<h2 id="so-where-are-we-with-chatgpt-text-detection">So where are we with chatGPT text detection?</h2>

<p>This needs a whole new post but for now, I’d say: be critical, don’t buy these claims of “99% accuracy” and live with the fact that we’ll probably never know for sure if a piece of text is human-written or completely machine-generated. That’s the new reality.</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[In this post I reason about the claimed accuracy of chatGPT detectors and why the task is far from being solved, in spite of those “99% accuracy” pitches that you hear.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230609-chatgpt-detectors/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230609-chatgpt-detectors/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">chatGPT would almost pass my take-home assignment for the Machine Learning Engineer role</title><link href="https://yorko.github.io/2023/chatgpt-mle-take-home/" rel="alternate" type="text/html" title="chatGPT would almost pass my take-home assignment for the Machine Learning Engineer role" /><published>2023-01-30T00:00:00+00:00</published><updated>2023-01-30T00:00:00+00:00</updated><id>https://yorko.github.io/2023/chatgpt-mle-take-home</id><content type="html" xml:base="https://yorko.github.io/2023/chatgpt-mle-take-home/"><![CDATA[<p><em>In this post I describe how chatGPT can cover around 90% of the steps needed to successfully crack a take-home assignment for the Machine Learning Engineer role.</em></p>

<hr />
<p>I know everyone is fed up with chatGPT by this point in time. But I find this story especially peculiar and worth sharing.</p>

<blockquote>
  <p>for those living in a cave or having 50h of meetings per week: chatGPT is <a href="https://openai.com/blog/chatgpt/">a new chatbot by OpenAI</a> based on GPT3, which is unprecedentedly good.</p>
</blockquote>

<p>I’ve just hired a Machine Learning engineer to join my team, and I used a take-home assignment for the selection process. I love giving take-home assignments for several reasons, one of them is that it’s a good proxy to assess the candidate’s ability to write clear code and communicate the findings. Some folks criticise such assignments as they make candidates spend their free time on job-related tasks. Fair enough, although I pay back by providing thorough written feedback on every candidate’s work (including code). I also enjoy this particular take-home assignment that I’m giving, as it’s based on my real pet-project: sentiment analysis of crypto news that I’ll describe in a different post (Russian speakers can enjoy the description on <a href="https://habr.com/ru/company/ods/blog/673376/">Habr.com</a>).</p>

<p>Having played a bit with chatGPT, I <a href="https://yorko.github.io/2022/chatgpt3-coref-resolution/">noticed</a> that it can do fairly well with machine learning course assignments. In particular, for <a href="https://mlcourse.ai/book/topic08/assignment08_implement_sgd_regressor.html">one</a> of the <a href="http://mlcourse.ai">mlcourse.ai</a> assignments, chatGPT also created a perfect, working implementation of a Stochastic Gradient Descent regressor, and then explained how to use the new Python class.</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mlcourse_a1_q1.png" width="700px" />
</div>
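<p>For context, a minimal SGD linear regressor of the kind that assignment asks for might look like this (my own sketch, not chatGPT’s actual output):</p>

```python
import numpy as np

class SimpleSGDRegressor:
    """Linear regression trained with stochastic gradient descent (MSE loss)."""

    def __init__(self, lr=0.05, n_epochs=200, seed=17):
        self.lr, self.n_epochs, self.seed = lr, n_epochs, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        n_samples, n_features = X.shape
        self.w_ = np.zeros(n_features)
        self.b_ = 0.0
        for _ in range(self.n_epochs):
            for i in rng.permutation(n_samples):  # one object at a time
                error = X[i] @ self.w_ + self.b_ - y[i]
                self.w_ -= self.lr * error * X[i]  # gradient of 0.5 * error^2
                self.b_ -= self.lr * error
        return self

    def predict(self, X):
        return X @ self.w_ + self.b_
```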

<p>I also heard from one math professor that chatGPT cracks most of the exams in calculus that he gives to students. So he has to review his assignments and probably redo most of them.</p>

<p>Naturally, I decided to check how well chatGPT would do on the MLE take-home assignment.</p>

<p>So, I simply fed the whole assignment description into chatGPT, and got back a long and reasonable description of the steps to be taken. And look what happens when I go on asking the model to generate Python code for the above.</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task1.png" width="700px" />
</div>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task2.png" width="700px" />
</div>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task3.png" width="700px" />
</div>

<p>So, we get separate code snippets for model training and evaluation, for scraping the website with test data and getting model predictions, and also for the Flask API prediction endpoint. All snippets are followed by reasonable descriptions in plain English. What was especially pleasing: in the description of the task (in English), I gave two lines of Python code to read the data, and the model understood and copied those two lines.</p>

<p>You can see that the code won’t work as is – fitting Logistic Regression with raw texts is not correct, but we’ll get there.</p>
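<p>The usual fix is to put a text-vectorization step in front of the classifier; a minimal sketch (variable names are assumptions, not the assignment’s actual code):</p>

```python
# Logistic Regression can't consume raw strings, so vectorize the text
# first, e.g. with TF-IDF, and chain the two steps into one pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

model = make_pipeline(
    TfidfVectorizer(),                  # raw text -> sparse TF-IDF matrix
    LogisticRegression(max_iter=1000),  # linear classifier on top
)
# model.fit(train_texts, train_labels)  # train_texts: a list of raw strings
```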

<p>Creating a Docker image is a no-brainer!</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task4.png" width="700px" />
</div>

<p>Then I asked it to structure the code a bit better:</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task5.png" width="700px" />
</div>

<p>Next: “What would be the content of the Readme file?” – and there it is, a clear Readme with all the instructions needed to run the API. It’s actually better than some 70% of the Readmes I saw while hiring an MLE.</p>

<p>chatGPT understands the context, keeps the generated code in “memory”, and can debug it! The error of skipping the text-representation step was fixed by the model itself.</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task6.png" width="700px" />
</div>

<p>One more example:</p>

<div style="text-align:center">
<img src="/images/20230130-chatgpt-mle-take-home/chatgpt_mle_task7.png" width="700px" />
</div>

<p>In the end, I made it work, having fixed 2-3 annoying bugs like the lack of permissions to install packages in the Dockerfile, a missing <code class="language-plaintext highlighter-rouge">app.run()</code> command in the Flask app code, etc.</p>
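<p>For context, a minimal version of such a Flask prediction endpoint (the <code class="language-plaintext highlighter-rouge">/predict</code> route and the keyword stand-in for the model are my illustrative assumptions):</p>

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_label(text: str) -> str:
    # Stand-in for a fitted sentiment model; in the real assignment this
    # would be a model loaded from disk, e.g. with pickle.
    return "positive" if "up" in text.lower() else "negative"

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json()["text"]
    return jsonify({"label": predict_label(text)})

# The piece chatGPT initially forgot: without it, the server never starts.
# app.run(host="0.0.0.0", port=5000)
```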

<p>But if it were a live coding interview, chatGPT would’ve covered 90% of the way to a fully working ML application. An average candidate wouldn’t do better.</p>

<p>Of course, some would say that chatGPT is generally stupid, it cannot solve even a quadratic equation, and in the case of this take-home assignment, it <em>simply</em> reproduced something seen on GitHub (as if it were not a miracle on its own!). But is it really so different from how a human would approach the assignment?</p>

<p>Here we are. I’ll finish the onboarding process for this MLE position and will get rid of this particular assignment. It’s actually a very good question what to replace it with.</p>

<p>Exciting times!</p>]]></content><author><name>Yury Kashnitsky</name><email>yury.kashnitsky@gmail.com</email></author><summary type="html"><![CDATA[In this post I describe how chatGPT can cover around 90% of the steps needed to successfully crack a take-home assignment for the Machine Learning Engineer role.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230130-chatgpt-mle-take-home/teaser.png%22%7D" /><media:content medium="image" url="https://yorko.github.io/%7B%22teaser%22=%3E%2220230130-chatgpt-mle-take-home/teaser.png%22%7D" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>