From Tampa Bay's Kevin Kiermaier to San Francisco's Brandon Crawford, it was hard to tell this year's list of Gold Glove winners, announced Tuesday night, from a list of players with the best advanced defensive metrics. That's no coincidence: Since 2013, Rawlings, the mitt-maker that annually hands out the Gold Glove hardware, has incorporated a statistical component known as the SABR (Society for American Baseball Research) Defensive Index, or SDI, giving it at least 25 percent weight in the voting. The rest of the vote belongs to Major League Baseball managers and coaches, although the SDI can receive more weight (as much as 30 percent, in practice) depending on how many human voters fail to send in their ballots. But the impact of analytic tools is probably undersold by that number. Instead, the case can be made that the advanced stats have almost completely taken over the Gold Glove competition.

You can see this effect in how much more closely recent Gold Glove winners have matched the selections that would have been made using only defensive metrics. In this case, that means Baseball-Reference.com's measure of fielding runs above average, adjusted so that the average MLB player (across all positions) has a value of 0 runs saved. The precision of Baseball-Reference.com's metric, which uses defensive runs saved for seasons since 2003 and Total Zone for years before that, has changed over time; in recent years, it uses metrics that correspond very closely with those that make up the SDI.

Because MLB has expanded (offering more starting slots at a given position, and therefore the opportunity for more variance relative to average) and the quality of defensive metrics has improved (allowing metric creators to be more confident in handing out highly positive ratings), the average defensive quality of an "All-Defense" team selected purely using metrics has gradually increased since 1958, the first year in which two players at each position, one in each league, got Gold Gloves. (Gold Gloves were first awarded in 1957, with one award at each position across both leagues.) But the average quality of actual Gold Glove winners' fielding had stayed relatively flat for over 50 years, right up until the introduction of the SDI.

The gap between the real Gold Glove winners and what we've defined as the sabermetric ideal reached an all-time high of 14 runs in 2005. That year, voters infamously gave Derek Jeter a Gold Glove for what was, according to the numbers, one of the worst defensive seasons ever at shortstop. Defensive metrics were improving all the time, but the voters didn't appear to be paying attention.

The tide turned, however, with the adoption of the SDI in 2013. Immediately upon its inclusion in the voting process, the average statistical quality of a Gold Glove winner skyrocketed, from 10 runs below the sabermetric ideal in 2012 to half that a year later. Obviously, this is a bit of a circular finding: We're judging Gold Glove winners against a statistical standard determined by one of the same metrics that goes into the SDI itself. But the leap between the pre- and post-SDI eras is still striking.

So striking, in fact, that it even goes beyond what would be expected from the direct influence the SDI has on Gold Glove voting by dictating 25 percent of the vote.

"We think it's influenced the managers' and coaches' voting," Vince Gennaro, SABR's president and a member of the SDI committee, said about the SDI in a telephone interview Tuesday.

On top of the SDI numbers' algorithmic role in the voting process, Gennaro believes they have had a pronounced effect in combating incumbency bias and other reputation-based flaws in the human side of the voting. In other words, because they're so widely available (they're even listed on the ballots given to Gold Glove voters), the advanced metrics have also influenced the other 75 percent of the vote they don't directly control.

"[Say] you've got a guy who's not a perennial Gold Glove guy, but he really caught your eye this year," Gennaro said. "Then you see he had 17 runs saved, versus a guy who won it last year at 7. I think it could be very much a validating thing, and it might tip you to make that vote."

Because it essentially involves measuring players against the plays they didn't make, defense has always been one of the toughest areas of baseball to evaluate statistically. And the absence of detailed defensive data in the past might have caused voters to err on the side of a reputation that was no longer valid (or never was deserved). But now, advanced metrics provide evidence to either support or tear down commonly held beliefs about a player's defensive prowess, giving them a large amount of sway over both the human and computerized aspects of the Gold Glove process.

This isn't to say that every Gold Glove now conforms to the advanced metrics. For instance, Kansas City Royals teammates Eric Hosmer and Salvador Perez won this year despite ranking sixth and seventh at their respective positions in SDI. But aside from Hosmer and Perez, every other Gold Glover ranked in the top three in SDI at his position, and 10 of the 18 winners ranked first:

LEAGUE  POSITION  GOLD GLOVE WINNER  SDI RANK  SDI LEADER (IF DIFFERENT)
AL      P         Dallas Keuchel     1
AL      C         Salvador Perez     7         Caleb Joseph
AL      1B        Eric Hosmer        6         Mike Napoli
AL      2B        Jose Altuve        3         Ian Kinsler
AL      3B        Manny Machado      1
AL      SS        Alcides Escobar    1
AL      LF        Yoenis Cespedes    1
AL      CF        Kevin Kiermaier    1
AL      RF        Kole Calhoun       1
NL      P         Zack Greinke       1
NL      C         Yadier Molina      3         Buster Posey
NL      1B        Paul Goldschmidt   2         Brandon Belt
NL      2B        Dee Gordon         3         Danny Espinosa
NL      3B        Nolan Arenado      1
NL      SS        Brandon Crawford   1
NL      LF        Starling Marte     2         Christian Yelich
NL      CF        A.J. Pollock       3         Odubel Herrera
NL      RF        Jason Heyward      1

Likewise, it isn't completely clear that a wholesale metric takeover of the Gold Gloves would be a good thing. While we can measure whether Gold Gloves are getting closer to the sabermetric ideal, it will take further research to see whether that development means having a Gold Glover in the field leads to his team playing better defense.

But since the introduction of the SDI, the Gold Glove process has undeniably become more quantitative. And that's a pretty big shift for an award that used to be as allergic to meaningful statistics as any in the game.
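The article's tallies can be double-checked with a short script. The SDI ranks below are the ones reported for this year's 18 Gold Glove winners; the counts reproduce the claims that 10 of 18 winners led their position in SDI and that only Hosmer and Perez fell outside the top three.

```python
# SDI ranks of the 2015 Gold Glove winners, as reported in the article.
winner_sdi_ranks = {
    "Dallas Keuchel": 1, "Salvador Perez": 7, "Eric Hosmer": 6,
    "Jose Altuve": 3, "Manny Machado": 1, "Alcides Escobar": 1,
    "Yoenis Cespedes": 1, "Kevin Kiermaier": 1, "Kole Calhoun": 1,
    "Zack Greinke": 1, "Yadier Molina": 3, "Paul Goldschmidt": 2,
    "Dee Gordon": 3, "Nolan Arenado": 1, "Brandon Crawford": 1,
    "Starling Marte": 2, "A.J. Pollock": 3, "Jason Heyward": 1,
}

ranked_first = sum(1 for r in winner_sdi_ranks.values() if r == 1)
top_three = sum(1 for r in winner_sdi_ranks.values() if r <= 3)
outside_top_three = [p for p, r in winner_sdi_ranks.items() if r > 3]

print(ranked_first)       # 10 winners led their position in SDI
print(top_three)          # 16 winners ranked in the top three
print(outside_top_three)  # only Perez and Hosmer fell outside it
```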
Providenciales, 22 Dec 2014 – Miss Teen TCI has launched a dramatic presence on Facebook to spotlight her platform: taking a stand against domestic violence.

Malique Ferrette is set to take off in the New Year to represent the Turks and Caicos Islands on an international level, but before she does, the intelligent beauty is making her presence felt by drawing attention to the dangers of domestic violence. In one of her charges to her followers on social media, Miss Teen TCI says: "Ladies, you are so much more than what 'they' say! Your body is beautiful, that mind of yours is powerful. Your potential is more than 'they' can ever conceive. Stop what you're doing and think about that! You are a queen and your worth, unmentionable. Keep your head up and forget 'their' words; for they do not capture the splendor of YOU."

Ferrette, who won the title in late September, will attend Miss Teen Universe in Guatemala in February.
In this series, The Way We Work, Entrepreneur Associate Editor Lydia Belanger examines how people foster productivity, focus, collaboration, creativity and culture in the workplace.

If your criminal record is a clean slate, you probably don't think too much about background checks. But probability suggests it's almost certain that someone you work with has one.

What constitutes a criminal record can range from minor to major crimes, from fishing without a license to speeding to driving under the influence to murder. But historically, some background check processes haven't distinguished among offenses of varying degrees.

The Ban the Box movement has pushed to remove the "Have you ever been convicted of a crime?" checkbox on job applications, which can automatically land a candidate's application in the trash regardless of what that crime might have been. A company might look at its large candidate pool and decide to pass on anyone with a criminal record, unwilling to spend the time and human resources to vet them.

The time-consuming nature of background checks was the initial inspiration for background check software company Checkr. Co-founders Daniel Yanisse and Jonathan Perichon were working as engineers at on-demand delivery startup Deliv back in 2013, when the "gig economy" had first taken shape. Suddenly, consumers were trusting strangers employed for contract work by startups to clean their homes, drive them around, deliver their dinner and more. Deliv needed lots of drivers, but existing background check technology made the screening process take a week or more.

Today, using AI, Checkr automates background checks, and the process takes only a day or two. The San Francisco-based company's 10,000-plus customers include gig-economy giants such as Uber, Lyft, Instacart, Postmates and Grubhub, as well as staffing companies, retailers and more.

Background checks vary in their thoroughness.
Some include driving records, employment verifications and drug tests. Checkr gathers data from various sources, including criminal records from courthouses at the county, state and federal levels. Many of these databases are digital, but for those that still keep records on paper, Checkr dispatches a contractor to collect the information manually. Employers typically pay $35 per applicant screening; pricing varies based on the information requested and the volume of checks a customer runs.

Checkr also adds a layer of quality assurance, confirming that, say, two guys with the same name, birth date and state of residence aren't confused for each other. Finally, it's programmed to account for differing background check regulations at the state level.

"That would take multiple days for humans to review and do the work," Yanisse says, "so we've automated that with algorithms, which is more accurate and faster."

But the Checkr team didn't set out to help the Ubers of the world reject convicts faster. From day one, Yanisse says, they've designed the software to reduce bias in hiring and give more qualified candidates a chance at employment. He points out that the U.S. economy loses roughly $87 billion annually because people with criminal records can't get hired, and that nearly 75 percent of formerly incarcerated individuals are still unemployed a year after release, according to ACLU research.

People who have criminal records but don't have quality jobs that pay a living wage are more likely to commit a crime again. That translates to more people on government assistance or in jail, and more taxpayer money to pay for it.
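The quality-assurance step described above, confirming that two people with the same name, birth date and state of residence aren't merged into one record, can be sketched as a matching rule that refuses to combine records unless a stronger identifier also agrees. The fields and the rule here are illustrative assumptions, not Checkr's actual matching logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    name: str
    birth_date: str
    state: str
    # A stronger identifier (here, a middle name) guards against mix-ups.
    middle_name: Optional[str] = None

def same_person(candidate: Record, court_record: Record) -> bool:
    """Hypothetical rule: name, birth date and state alone are not enough
    to merge two records; a stronger identifier must also agree."""
    basics_match = (
        candidate.name == court_record.name
        and candidate.birth_date == court_record.birth_date
        and candidate.state == court_record.state
    )
    if not basics_match:
        return False
    # If either side lacks the stronger identifier, refuse to auto-merge.
    if candidate.middle_name is None or court_record.middle_name is None:
        return False
    return candidate.middle_name == court_record.middle_name

# Two different "John Smith"s born the same day in the same state:
a = Record("John Smith", "1990-01-01", "CA", "Allen")
b = Record("John Smith", "1990-01-01", "CA", "Brett")
print(same_person(a, b))  # False: the two records are kept separate
```

A real matcher would weigh many more signals (addresses, case numbers, partial SSNs), but the design point is the same: err on the side of not merging.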
Meanwhile, employment retention rates are higher among people with criminal histories than among the general population, which translates to lower long-term recruitment costs for companies that hire them, according to the ACLU.

"When you look at statistics, it's extremely rare to have workplace violence or crime, especially if you have a good interview process and good HR practices in place," Yanisse says, explaining that the background check industry has long relied on fear-mongering as a marketing tactic. "So we just want to rebalance."

Checkr also corrects for variation and biases among humans who are trying to hire fairly. It is unrealistic to hold a large company to one hiring standard: If someone has the discretion to reject or advance a candidate, he or she might have trouble standardizing the approach.

Each Checkr client can customize the software to ignore, or never surface, data about criminal offenses that they don't deem relevant to the job at hand. Someone applying to be a customer service representative, for example, could perform the job perfectly well even with traffic violations on their record. A ride-hailing driver? Perhaps not. Companies can also specify time windows: If the crime happened before a certain year, it's no longer a concern.

"In reality, when you would get a background check report in the past, if there was a long list of things, even if it was just a long list of traffic violations or things like that, you probably were going to be biased," Yanisse says. "Like, 'Ooh, there are a bunch of flags here. I'm not sure I want to take the risk.'"

When it comes to AI, many people worry about the potential for it to reflect the biases of the human software engineers who built it.
But Yanisse insists that doesn't come into play with Checkr, because the AI is not determining whether a candidate is qualified for the job or making hiring decisions for its user companies.

"We are not predicting if the person is going to be a good fit for the job," Yanisse says. "We're using AI for classifying data. Like, is this a driving violation or is this a physical crime? Those things are more fact-based."

Yanisse likens Checkr to the advent of the credit score. Before FICO and the credit bureaus automated credit score determinations, a loan officer at a bank could pass judgment on, or discriminate against, a credit applicant.

"It removes the ability for employers to use a background check as a proxy to screen people out," says David Patterson, Checkr's head of communications. "Historically, employers would often tell people who were minorities that they didn't get the job and use the background check as an excuse, like, 'Oh, we couldn't hire them because they had this, like, minor thing on their record.' But that may not have been the actual reason."

At the moment, Checkr is focused on measuring its impact and calculating how many candidates would have been rejected with or without its software. The company reports that last year it helped 8,000 candidates get accepted rather than declined; for 2018, its goal is 80,000. It aims to achieve that not only by signing on more clients, but by getting its existing clients to fine-tune their use of the tool.

"Initially, I think it was quite binary how decisions were made. Either you're clean and you haven't done anything and you're a good person, or you have some flags in your background, so we're not going to hire you," Yanisse says. "But when you get into the details, you realize that that's not the case. There's no good or bad people, there's a whole spectrum.
People make mistakes, and there are different severities of mistakes."
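The per-client screening rules described earlier (ignoring offense categories that aren't relevant to the job, plus a lookback window) amount to a simple filter over a candidate's record. The sketch below illustrates that idea only; the category names, years, and function names are assumptions for illustration, not Checkr's actual API.

```python
from dataclasses import dataclass

@dataclass
class Offense:
    category: str  # e.g. "traffic", "theft", "violent"
    year: int

@dataclass
class ScreeningPolicy:
    """Hypothetical per-client policy: which offense categories to
    ignore, and how far back in time to look."""
    ignored_categories: set
    lookback_start_year: int

def relevant_offenses(record, policy):
    """Return only the offenses the client considers job-relevant."""
    return [
        o for o in record
        if o.category not in policy.ignored_categories
        and o.year >= policy.lookback_start_year
    ]

record = [Offense("traffic", 2016), Offense("traffic", 2017), Offense("theft", 2009)]

# A customer-service employer might ignore traffic offenses entirely
# and look back only seven years:
csr_policy = ScreeningPolicy(ignored_categories={"traffic"}, lookback_start_year=2011)
print(relevant_offenses(record, csr_policy))  # [] -- nothing is surfaced

# A ride-hailing employer would keep traffic offenses in scope:
driver_policy = ScreeningPolicy(ignored_categories=set(), lookback_start_year=2011)
print(len(relevant_offenses(record, driver_policy)))  # 2 recent traffic offenses
```

Under a policy like this, the same record yields different reports for different jobs, which is the "rebalancing" the article describes: the old theft conviction and the traffic tickets never reach the customer-service reviewer at all.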