Question:

KeyError: "None of [Int64Index…] dtype='int64' are in the [columns]"

巫经义
2023-03-14

I am trying to shuffle my indices using np.random.shuffle(), but I keep getting an error I don't understand. I would really appreciate it if someone could help me figure this out. Thank you very much.

When I first created my raw_csv_data variable, I tried using delimiter=',' and delim_whitespace=0 because I thought that was the fix suggested in another question, but it kept throwing the same error.

    import pandas as pd 
    import numpy as np 
    from sklearn.preprocessing import StandardScaler

    #%%
    raw_csv_data= pd.read_csv('Absenteeism-data.csv')
    print(raw_csv_data)
    #%%
    df= raw_csv_data.copy()
    print(display(df))
    #%%
    pd.options.display.max_columns=None
    pd.options.display.max_rows=None
    print(display(df))
    #%%
    print(df.info())
    #%%
    df=df.drop(['ID'], axis=1)

    #%%
    print(display(df.head()))

    #%%
    #Our goal is to see who is more likely to be absent. Let's define
    #our targets from our dependent variable, Absenteeism Time in Hours
    print(df['Absenteeism Time in Hours'])
    print(df['Absenteeism Time in Hours'].median())
    #%%
    targets= np.where(df['Absenteeism Time in Hours']>df['Absenteeism Time in Hours'].median(),1,0)
    #%%
    print(targets)
    #%%
    df['Excessive Absenteeism']= targets
    #%%
    print(df.head())

    #%%
    #Let's Separate the Day and Month Values to see if there is correlation
    #between Day of week/month with absence
    print(type(df['Date'][0]))
    #%%
    df['Date']= pd.to_datetime(df['Date'], format='%d/%m/%Y')
    #%%
    print(df['Date'])
    print(type(df['Date'][0]))
    #%%
    #Extracting the Month Value
    print(df['Date'][0].month)
    #%%
    list_months=[]
    print(list_months)
    #%%
    print(df.shape)
    #%%
    for i in range(df.shape[0]):
        list_months.append(df['Date'][i].month)
    #%%
    print(list_months)
    #%%
    print(len(list_months))
    #%%
    #Let's Create a Month Value Column for df
    df['Month Value']= list_months
    #%%
    print(df.head())
    #%%
    #Now let's extract the day of the week from date
    df['Date'][699].weekday()
    #%%
    def date_to_weekday(date_value):
        return date_value.weekday()
    #%%
    df['Day of the Week']= df['Date'].apply(date_to_weekday)
    #%%
    print(df.head())
    #%%
    df= df.drop(['Date'], axis=1)
    #%%
    print(df.columns.values)
    #%%
    reordered_columns= ['Reason for Absence', 'Month Value','Day of the Week',
     'Transportation Expense', 'Distance to Work', 'Age',
     'Daily Work Load Average', 'Body Mass Index', 'Education', 'Children',
     'Pets', 'Absenteeism Time in Hours', 'Excessive Absenteeism']
    #%%
    df=df[reordered_columns]
    print(df.head())
    #%%
    #First Checkpoint
    df_date_mod= df.copy()
    #%%
    print(df_date_mod)

    #%%
    #Let's Standardize our inputs, ignoring the Reasons and Education Columns
    #Because they are labelled by a separate categorical criteria, not numerically
    print(df_date_mod.columns.values)
    #%%
    unscaled_inputs= df_date_mod.loc[:, ['Month Value','Day of the Week',
     'Transportation Expense','Distance to Work','Age','Daily Work Load Average',
     'Body Mass Index','Children','Pets','Absenteeism Time in Hours']]
    #%%
    print(display(unscaled_inputs))
    #%%
    absenteeism_scaler= StandardScaler()
    #%%
    absenteeism_scaler.fit(unscaled_inputs)
    #%%
    scaled_inputs= absenteeism_scaler.transform(unscaled_inputs)
    #%%
    print(display(scaled_inputs))
    #%%
    print(scaled_inputs.shape)
    #%%
    scaled_inputs= pd.DataFrame(scaled_inputs, columns=['Month Value',
     'Day of the Week','Transportation Expense','Distance to Work','Age',
     'Daily Work Load Average','Body Mass Index','Children','Pets',
     'Absenteeism Time in Hours'])
    print(display(scaled_inputs))
    #%%
    df_date_mod= df_date_mod.drop(['Month Value','Day of the Week',
     'Transportation Expense','Distance to Work','Age','Daily Work Load Average',
     'Body Mass Index','Children','Pets','Absenteeism Time in Hours'], axis=1)
    print(display(df_date_mod))
    #%%
    df_date_mod=pd.concat([df_date_mod,scaled_inputs], axis=1)
    print(display(df_date_mod))
    #%%
    df_date_mod= df_date_mod[reordered_columns]
    print(display(df_date_mod.head()))
    #%%
    #Checkpoint
    df_date_scale_mod= df_date_mod.copy()
    print(display(df_date_scale_mod.head()))
    #%%
    #Let's Analyze the Reason for Absence Category
    print(df_date_scale_mod['Reason for Absence'])
    #%%
    print(df_date_scale_mod['Reason for Absence'].min())
    print(df_date_scale_mod['Reason for Absence'].max())
    #%%
    print(df_date_scale_mod['Reason for Absence'].unique())
    #%%
    print(len(df_date_scale_mod['Reason for Absence'].unique()))
    #%%
    print(sorted(df['Reason for Absence'].unique()))
    #%%
    reason_columns= pd.get_dummies(df['Reason for Absence'])
    print(reason_columns)
    #%%
    reason_columns['check']= reason_columns.sum(axis=1)
    print(reason_columns)
    #%%
    print(reason_columns['check'].sum(axis=0))
    #%%
    print(reason_columns['check'].unique())
    #%%
    reason_columns=reason_columns.drop(['check'], axis=1)
    print(reason_columns)
    #%%
    reason_columns=pd.get_dummies(df_date_scale_mod['Reason for Absence'], drop_first=True)
    print(reason_columns)
    #%%
    print(df_date_scale_mod.columns.values)
    #%%
    print(reason_columns.columns.values)
    #%%
    df_date_scale_mod= df_date_scale_mod.drop(['Reason for Absence'], axis=1)
    print(df_date_scale_mod)
    #%%
    reason_type_1= reason_columns.loc[:, 1:14].max(axis=1)
    reason_type_2= reason_columns.loc[:, 15:17].max(axis=1)
    reason_type_3= reason_columns.loc[:, 18:21].max(axis=1)
    reason_type_4= reason_columns.loc[:, 22:].max(axis=1)
    #%%
    print(reason_type_1)
    print(reason_type_2)
    print(reason_type_3)
    print(reason_type_4)
    #%%
    print(df_date_scale_mod.head())
    #%%
    df_date_scale_mod= pd.concat([df_date_scale_mod, reason_type_1,
     reason_type_2, reason_type_3, reason_type_4], axis=1)
    print(df_date_scale_mod.head())
    #%%
    print(df_date_scale_mod.columns.values)
    #%%
    column_names= ['Month Value','Day of the Week','Transportation Expense',
     'Distance to Work','Age','Daily Work Load Average','Body Mass Index',
     'Education','Children','Pets','Absenteeism Time in Hours',
     'Excessive Absenteeism', 'Reason_1', 'Reason_2', 'Reason_3', 'Reason_4']

    df_date_scale_mod.columns= column_names
    print(df_date_scale_mod.head())
    #%%
    column_names_reordered= ['Reason_1', 'Reason_2', 'Reason_3', 'Reason_4',
     'Month Value','Day of the Week','Transportation Expense',
     'Distance to Work','Age','Daily Work Load Average','Body Mass Index',
     'Education','Children','Pets','Absenteeism Time in Hours',
     'Excessive Absenteeism']

    df_date_scale_mod=df_date_scale_mod[column_names_reordered]
    print(display(df_date_scale_mod.head()))
    #%%
    #Checkpoint
    df_date_scale_mod_reas= df_date_scale_mod.copy()
    print(df_date_scale_mod_reas.head())
    #%%
    #Let's Look at the Education column now
    print(df_date_scale_mod_reas['Education'].unique())
    #This shows us that education is rated from 1-4 based on level
    #of completion
    #%%
    print(df_date_scale_mod_reas['Education'].value_counts())
    #The overwhelming majority of workers are highschool educated, while the
    #rest have higher degrees
    #%%
    #We'll create our dummy variables as highschool and higher education
    df_date_scale_mod_reas['Education']= df_date_scale_mod_reas['Education'].map({1:0, 2:1, 3:1, 4:1})
    #%%
    print(df_date_scale_mod_reas['Education'].unique())
    #%%
    print(df_date_scale_mod_reas['Education'].value_counts())
    #%%
    #Checkpoint
    df_preprocessed= df_date_scale_mod_reas.copy()
    print(display(df_preprocessed.head()))
    #%%
    #%%
    #Split Inputs from targets
    scaled_inputs_all= df_preprocessed.loc[:,'Reason_1':'Absenteeism Time in Hours']
    print(display(scaled_inputs_all.head()))
    print(scaled_inputs_all.shape)
    #%%
    targets_all= df_preprocessed.loc[:,'Excessive Absenteeism']
    print(display(targets_all.head()))
    print(targets_all.shape)
    #%%
    #Shuffle Inputs and targets
    shuffled_indices= np.arange(scaled_inputs_all.shape[0])
    np.random.shuffle(shuffled_indices)
    shuffled_inputs= scaled_inputs_all[shuffled_indices]
    shuffled_targets= targets_all[shuffled_indices]

This is the error I keep getting when I try to shuffle the indices:

    KeyError                                  Traceback (most recent call last)
    <ipython-input> in <module>
          1 shuffled_indices= np.arange(scaled_inputs_all.shape[0])
          2 np.random.shuffle(shuffled_indices)
    ----> 3 shuffled_inputs= scaled_inputs_all[shuffled_indices]
          4 shuffled_targets= targets_all[shuffled_indices]

    ~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
       2932             key = list(key)
       2933             indexer = self.loc._convert_to_indexer(key, axis=1,

    ~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter, raise_missing)
       1352             kwargs = {'raise_missing': True if is_setter else
       1353                       raise_missing}

    ~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
       1159         self._validate_read_indexer(keyarr, indexer,
       1160                                     o._get_axis_number(axis),

    ~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
       1244                 raise KeyError(
       1245                     u"None of [{key}] are in the [{axis}]".format(

    KeyError: "None of [Int64Index([560, 320, 405, 141, 154, 370, 656, 26, 444, 307,\n            ...\n            429, 542, 676, 588, 315, 284, 293, 607, 197, 250],\n           dtype='int64', length=700)] are in the [columns]"

3 Answers

於宾白
2023-03-14

I had this problem too. I solved it by converting the DataFrame and Series to arrays.

Try this line of code:

scaled_inputs_all.iloc[shuffled_indices].values 
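A minimal sketch of that approach, using a small made-up frame in place of scaled_inputs_all (the column names and values here are illustrative, not the asker's actual 700-row dataset):

```python
import numpy as np
import pandas as pd

# Toy stand-in for scaled_inputs_all; the real frame has 700 rows
scaled_inputs_all = pd.DataFrame({'a': [10, 20, 30, 40], 'b': [1, 2, 3, 4]})

shuffled_indices = np.arange(scaled_inputs_all.shape[0])
np.random.shuffle(shuffled_indices)

# .iloc selects rows by position; .values hands back a plain NumPy array,
# so the later NumPy-style indexing works without label lookups
shuffled_inputs = scaled_inputs_all.iloc[shuffled_indices].values
print(shuffled_inputs.shape)  # (4, 2)
```

Once the data is a plain ndarray, integer-array indexing behaves the way the original np.random.shuffle() workflow expects.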
司空兴为
2023-03-14

You may run into the same error when doing machine learning with KFold.

The fix is as follows:


You need to use iloc:

 X_train, X_test = X.iloc[train_index], X.iloc[test_index]

 y_train, y_test = y.iloc[train_index], y.iloc[test_index]
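A self-contained sketch of that pattern, with a tiny synthetic X and y (the variable names mirror the snippet above; the data is made up):

```python
import pandas as pd
from sklearn.model_selection import KFold

X = pd.DataFrame({'feature': range(10)})
y = pd.Series(range(10))

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_index, test_index in kf.split(X):
    # KFold yields positional integer arrays, so index with .iloc, not []
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
    print(X_train.shape, X_test.shape)  # (8, 1) (2, 1) on every fold
```

Using plain X[train_index] here would attempt a column-label lookup and raise the same KeyError as in the question.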
谷梁驰
2023-03-14

You created the scaled_inputs_all DataFrame using the loc function, so it most likely does not contain consecutive integer positions as its index.

On the other hand, you created shuffled_indices as a shuffle of a range of consecutive numbers.

Remember that scaled_inputs_all[shuffled_indices] looks up scaled_inputs_all by the label values in shuffled_indices, not by position.

Perhaps you should write:

scaled_inputs_all.iloc[shuffled_indices]

Note that iloc provides integer-position based indexing, regardless of the index values, which is what you need here.
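The difference is easy to see on a contrived three-row frame whose index labels are not 0..n-1: plain [] with an integer array is treated as a column-label lookup, which is exactly what raises the KeyError above, while .iloc indexes rows by position:

```python
import numpy as np
import pandas as pd

# Index labels deliberately differ from positions 0, 1, 2
df = pd.DataFrame({'x': [1, 2, 3]}, index=[10, 20, 30])
positions = np.array([2, 0, 1])

try:
    df[positions]            # [] treats the array as column labels
except KeyError:
    print('KeyError: none of [2, 0, 1] are in the columns')

# .iloc ignores the labels and selects by row position
print(df.iloc[positions]['x'].tolist())  # [3, 1, 2]
```

This mirrors the question exactly: scaled_inputs_all was built with loc, its index is whatever labels survived, and shuffled positional integers only work through iloc.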
