
Windows shortcuts

CTRL + A | Select All
CTRL + ALT + V | Paste Special
CTRL + B | Bold
CTRL + C | Copy
CTRL + D | Fill Down
CTRL + F | Find
CTRL + G | Go To
CTRL + H | Replace
CTRL + I | Italic
CTRL + K | Insert Hyperlink
CTRL + N | New Workbook
CTRL + O | Open File
CTRL + P | Print
CTRL + R | Fill Right
CTRL + S | Save Workbook
CTRL + T | Create Table
CTRL + U | Underline
CTRL + V | Paste
CTRL + W | Close Window
CTRL + X | Cut
CTRL + Y | Repeat
CTRL + Z | Undo
WIN + SHIFT + S | Take a snipping-tool screenshot
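
These key combinations can also be sent from a script, which can be handy when automating a repetitive copy, paste, or screenshot step. The sketch below is only an illustration, not part of the shortcut list itself, and it assumes the third-party pyautogui package (not mentioned above) is installed with pip install pyautogui:

import time
import pyautogui  # assumption: third-party library, installed via `pip install pyautogui`

time.sleep(3)  # pause so the target window can be brought into focus first

pyautogui.hotkey('ctrl', 'a')          # CTRL + A  -> Select All
pyautogui.hotkey('ctrl', 'c')          # CTRL + C  -> Copy
pyautogui.hotkey('ctrl', 'v')          # CTRL + V  -> Paste
pyautogui.hotkey('win', 'shift', 's')  # WIN + SHIFT + S -> snipping screenshot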


